slick insert query with forceInsertQuery - scala

I need to copy a table into another table with the same schema.
I would like to do something like
insert into table1 select * from table2
In Slick, it seems possible to insert using a query.
There is a function with signature .insert(:Query)
In my table I defined an "id" column with the auto-increment option.
However, Slick automatically omits the auto-increment column unless the forceInsert method is used.
In this case, the column counts don't match, as I can see when I print the SQL out:
val table = TableQuery[Table_X]
println(TableQuery[Table_Y].insertStatementFor( table.take(1000) ))
The insert statement lacks the "id" column, but table.take(1000) includes it.
How can I solve this problem?
I see a function called forceInsertQuery in the Slick source code on GitHub. I am not sure whether this can help me or not.
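For what it's worth, forceInsertQuery is meant for exactly this: it keeps the AutoInc column in the generated INSERT ... SELECT. A minimal, untested sketch, assuming Slick 3.x (where inserts are DBIO actions), that Table_X and Table_Y map to the same row type, and that db is an already-configured Database instance:

// Sketch only: the profile import depends on your database and Slick version
// (slick.driver.PostgresDriver.api._ on older Slick 3 releases).
import slick.jdbc.PostgresProfile.api._

val source = TableQuery[Table_X]
val target = TableQuery[Table_Y]

// Unlike += / insert, forceInsertQuery does not strip the AutoInc "id" column,
// so the column lists of the INSERT and the SELECT line up.
val copyAction = target.forceInsertQuery(source.take(1000))

db.run(copyAction)  // yields the number of copied rows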

Related

Postgres: Can I bypass the error "cannot insert into generated column" using a PostgreSQL INSTEAD OF INSERT rule?

I know this isn't pretty, but it would be helpful to bypass the error for inserts into a generated column in Postgres. Let's say we have a table like so:
create table testing (
id int primary key,
fullname_enc bytea,
fullname text generated always as (pgp_sym_decrypt(fullname_enc, 'key')) stored
);
A query like the following returns the expected error: ERROR: cannot insert into column "fullname" DETAIL: Column "fullname" is a generated column.
insert into testing(id, fullname) values (3, 'John Doe');
I want to create a rule on this table on INSERTs like:
create rule encrypter as on insert to testing DO INSTEAD insert into testing (id, fullname_enc) values (new.id, pgp_sym_encrypt(new.fullname, 'key'));
Since the rule rewrites the query, I naively thought this would not result in the error from the engine, but it still does. Any idea how this could be achieved?
The reason for asking this is migration to PostgreSQL 12.
This cannot be achieved, and if it could be achieved somehow, that would be a bug that needs to be fixed. Otherwise, restoring from a dump would change the values.
I think that what you need is a BEFORE trigger that sets fullname.
I hope that this is a mock example and not something that is intended to improve security.
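For reference, a rough sketch of that trigger idea, assuming fullname is turned into an ordinary text column (a trigger cannot write to a generated column either); the function and trigger names are made up:

-- Sketch only: fullname as a plain column kept in sync by a trigger
-- instead of GENERATED ALWAYS AS ... STORED.
create table testing (
id int primary key,
fullname_enc bytea,
fullname text
);

create function testing_set_fullname() returns trigger
language plpgsql as $$
begin
  new.fullname := pgp_sym_decrypt(new.fullname_enc, 'key');
  return new;
end;
$$;

create trigger set_fullname
before insert or update on testing
for each row execute function testing_set_fullname();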

Workaround in Redshift for "ADD COLUMN IF NOT EXISTS"

I'm trying to execute an S3 copy operation via Spark-Redshift and I'm looking to modify the Redshift table structure before running the copy command in order to add any missing columns (they should be all VARCHAR).
What I'm able to do is send an SQL query before running the copy, so ideally I would have liked to ALTER TABLE ADD COLUMN IF NOT EXISTS column_name VARCHAR(256). Unfortunately, Redshift does not offer support for ADD COLUMN IF NOT EXISTS, so I'm currently looking for a workaround.
I've tried to query the pg_table_def table to check for the existence of the column, and that works, but I'm not sure how to chain that with an ALTER TABLE statement. Here's the current state of my query; I'm open to any suggestions for accomplishing the above.
select
case when count(*) < 1 then ALTER TABLE tbl { ADD COLUMN 'test_col' VARCHAR(256) }
else 'ok'
end
from pg_table_def where schemaname = 'schema' and tablename = 'tbl' and pg_table_def.column = 'test_col'
Also, I've seen this question: Redshift: add column if not exists; however, the accepted answer doesn't mention how to actually achieve this.
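If the check can run from the JVM side rather than as a single SQL preaction, one possible workaround is to query pg_table_def first and only issue the ALTER TABLE when the column is missing. A rough, untested sketch; the JDBC URL, credentials, and the schema/table/column names are placeholders:

import java.sql.DriverManager

// Sketch only: requires the Redshift (or Postgres) JDBC driver on the classpath.
// Note that pg_table_def only lists schemas that are on the search_path.
val conn = DriverManager.getConnection(
  "jdbc:redshift://example-cluster:5439/dev", "user", "password")
try {
  val check = conn.prepareStatement(
    "select count(*) from pg_table_def " +
    "where schemaname = ? and tablename = ? and \"column\" = ?")
  check.setString(1, "schema")
  check.setString(2, "tbl")
  check.setString(3, "test_col")
  val rs = check.executeQuery()
  rs.next()

  if (rs.getLong(1) == 0) {
    // column is missing, so it is safe to add it
    conn.createStatement().executeUpdate(
      "alter table schema.tbl add column test_col varchar(256)")
  }
} finally conn.close()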

Using postgres currval() in jooq

I am working with a Postgres database and Java. I am using jOOQ to query my database.
I need to make an insert into my table and get the primary key / sequence value generated by that insert. I know that in plain Postgres I can do it like this:
This is what my table looks like:
CREATE TABLE "myTable" (
"id" SERIAL NOT NULL,
"some_text" TEXT NOT NULL,
PRIMARY KEY ("id")
);
This is the insert query:
INSERT INTO public.myTable(some_text)
VALUES ('myValue');
and then, to get the latest sequence value:
SELECT currval('myTableName_myColumnName_seq')
FROM myTable;
How can I use currval in jOOQ?
Right now I am attempting something like this:
config.dsl().insertInto(Tables.myTable)
.set(Tables.myTable.myText, inputText)
.execute();
config.dsl().select.currval('myTableName_myColumnName_seq')
.from myTable;
but of course the last statement gives an error.
The problem with your solution is that while you are inserting a record into your table, another process might get a value from the sequence, and your second query (SELECT currval) would then return the wrong value.
PostgreSQL allows you to get some data back in INSERT statement with RETURNING clause:
INSERT INTO public.myTable(some_text)
VALUES ('myValue')
RETURNING id;
As the jOOQ manual states, you should use returning and fetch in this case. I'm not sure about the proper usage (I'm not familiar with jOOQ), but it should be something like the following:
config.dsl().insertInto(Tables.myTable)
.set(Tables.myTable.myText, inputText)
.returning(Tables.myTable.id)
.fetch();
You can get the current value of a sequence through Sequence.currval(), which returns an expression for that purpose. E.g.
dsl().select(MYTABLENAME_MYCOLUMNNAME_SEQ.currval()).from(...)
But since this sequence is auto-generated from a SERIAL column, which produces sequence values automatically on your insertions, I completely agree with icuken's answer: you should use INSERT .. RETURNING instead.
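For completeness, reading the generated key back from the returning call would look roughly like this (untested sketch; it reuses the myTable / id names from the snippets above):

// Sketch: fetchOne() returns the single inserted record, from which the
// generated id can be read back.
Record record = config.dsl().insertInto(Tables.myTable)
    .set(Tables.myTable.myText, inputText)
    .returning(Tables.myTable.id)
    .fetchOne();

Integer generatedId = record.get(Tables.myTable.id);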

PostgreSQL bulk insert with ActiveRecord

I have a lot of records that originally came from MySQL. I massaged the data so that it can be successfully inserted into PostgreSQL using ActiveRecord. I can easily do this with row-by-row insertions, i.e. one row at a time, but that is very slow. I want to do a bulk insert, but it fails if any of the rows contains invalid data. Is there any way I can achieve a bulk insert where only the invalid rows fail instead of the whole batch?
COPY
When using SQL COPY for bulk insert (or its equivalent \copy in the psql client), failure is not an option. COPY cannot skip illegal lines. You have to match your input format to the table you import to.
If data itself (not decorators) is violating your table definition, there are ways to make this a lot more tolerant though. For instance: create a temporary staging table with all columns of type text. COPY to it, then fix offending rows with SQL commands before converting to the actual data type and inserting into the actual target table.
Consider this related answer:
How to bulk insert only new rows in PostreSQL
Or this more advanced case:
"ERROR: extra data after last expected column" when using PostgreSQL COPY
If NULL values are offending, remove the NOT NULL constraint from your target table temporarily. Fix the rows after COPY, then reinstate the constraint. Or take the route with the staging table, if you cannot afford to soften your rules temporarily.
Sample code:
ALTER TABLE tbl ALTER COLUMN col DROP NOT NULL;
COPY ...
-- repair, like ..
-- UPDATE tbl SET col = 0 WHERE col IS NULL;
ALTER TABLE tbl ALTER COLUMN col SET NOT NULL;
Or you just fix the source file. COPY tells you the number of the offending line. Use an editor of your preference and fix it, then retry. I like to use vim for that.
INSERT
For an INSERT (like commented) the check for NULL values is trivial:
To skip a row with a NULL value:
INSERT INTO (col1, ...
SELECT col1, ...
WHERE col1 IS NOT NULL
To insert sth. else instead of a NULL value (empty string in my example):
INSERT INTO (col1, ...
SELECT COALESCE(col1, ''), ...
A common work-around for this is to import the data into a TEMPORARY or UNLOGGED table with no constraints and, where data in the input is sufficiently bogus, text typed columns.
You can then do INSERT INTO ... SELECT queries against the data to populate the real table with a big query that cleans up the data during import. You can use a lot of CASE statements for this. The idea is to transform the data in one pass.
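For instance, a sketch of that pattern with made-up table and column names (staging, real_table and the columns are placeholders):

-- staging table: everything as text, no constraints
CREATE UNLOGGED TABLE staging (id text, amount text, created_at text);

COPY staging FROM '/path/to/data.csv' WITH (FORMAT csv);

-- transform and clean in one pass while filling the real table
INSERT INTO real_table (id, amount, created_at)
SELECT id::int
     , CASE WHEN amount ~ '^[0-9]+(\.[0-9]+)?$' THEN amount::numeric ELSE 0 END
     , NULLIF(created_at, '')::timestamp
FROM   staging
WHERE  id ~ '^[0-9]+$';   -- skip rows whose id is not a number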
You might be able to do many of the fixes in Ruby as you read the data in, then push the data to PostgreSQL using COPY ... FROM STDIN. This is possible with Ruby's Pg gem, see eg https://bitbucket.org/ged/ruby-pg/src/tip/sample/copyfrom.rb .
For more complicated cases, look at Pentaho Kettle or Talend Studio ETL tools.

Is there a way to quickly duplicate record in T-SQL?

I need to duplicate selected rows with all the fields exactly the same except the ID (an identity int column), which is added automatically by SQL Server.
What is the best way to duplicate/clone record or records (up to 50)?
Is there any built-in T-SQL functionality in MS SQL 2008, or do I need to do a SELECT and INSERT in a stored procedure?
The only way to accomplish what you want is by using Insert statements which enumerate every column except the identity column.
You can of course select multiple rows to be duplicated by using a Select statement in your Insert statements. However, I would assume that this will violate your business key (your other unique constraint on the table other than the surrogate key which you have right?) and require some other column to be altered as well.
Insert MyTable( ...
Select ...
From MyTable
Where ....
If it is a pure copy (minus the ID field) then the following will work (replace 'NameOfExistingTable' with the table you want to duplicate the rows from and optionally use the Where clause to limit the data that you wish to duplicate):
SELECT *
INTO #TempImportRowsTable
FROM (
SELECT *
FROM [NameOfExistingTable]
-- WHERE ID = 1
) AS createTable
-- If needed make other alterations to the temp table here
ALTER TABLE #TempImportRowsTable DROP COLUMN Id
INSERT INTO [NameOfExistingTable]
SELECT * FROM #TempImportRowsTable
DROP TABLE #TempImportRowsTable
If you're able to check the duplication condition as rows are inserted, you could put an INSERT trigger on the table. This would allow you to check the columns as they are inserted instead of having to select over the entire table.
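Sketched below (untested, with placeholder names MyTable, ID and BusinessKey, and assuming "duplicate" means a repeated business key), such a trigger could reject inserted rows that duplicate an existing row:

-- Sketch only: MyTable, ID and BusinessKey stand in for the real table,
-- identity column and unique business key.
CREATE TRIGGER trg_MyTable_CheckDuplicate
ON MyTable
AFTER INSERT
AS
BEGIN
    IF EXISTS (
        SELECT 1
        FROM MyTable t
        JOIN inserted i
          ON i.BusinessKey = t.BusinessKey
         AND i.ID <> t.ID
    )
    BEGIN
        RAISERROR ('Inserted row duplicates an existing business key.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END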