Invoke-SqlCmd for migrations script fails on columns that will be introduced - azure-devops

I'm trying to get database migrations into my release pipeline definition, roughly following this approach: dotnet ef migrations script --idempotent as part of the build pipeline, and then an Invoke-SqlCmd task pointing to the resulting script as part of the release pipeline.
However, I have made one change from that blog post that might be of importance: in order to avoid corrupt states where half a migration is deployed, I wrap the entire script in a transaction (with SET XACT_ABORT ON, so that any run-time error aborts and rolls back the whole transaction), so that it is effectively
SET XACT_ABORT ON
BEGIN TRANSACTION
-- output of dotnet ef migrations script --idempotent
COMMIT
However, when I try to run the script as part of my release pipeline, in an Azure SQL Database task just like the one used in the blog post, it fails on references to objects that don't exist in the schema yet, but will exist by the time execution reaches the statement that references them.
For example, consider the following migrations script:
SET XACT_ABORT ON
BEGIN TRANSACTION
-- in the actual script, each of these statements is, individually, wrapped in an IF
-- that checks whether the migration has already been run, similar to the first one
-- in this listing
IF NOT EXISTS (SELECT * FROM [__EFMigrationsHistory] WHERE MigrationId = 'AddColumnBar')
BEGIN
ALTER TABLE Foo ADD Bar int NULL
END
GO
UPDATE Foo SET Bar = 3
GO
ALTER TABLE Foo ALTER COLUMN Bar int NOT NULL
GO
COMMIT
Executing this script with Invoke-SqlCmd as suggested in the blog post yields an "Invalid column name" error for Bar.
How do I tell Invoke-SqlCmd that I know the column doesn't exist yet, but that it will by the time it's needed? If that's not possible, what's the best approach to fix this deployment flow for Azure DevOps?
Additional info:
Here's the complete log from the SQL step in the release pipeline:
2019-01-09T14:57:52.7983184Z ##[section]Starting: Apply EF Migrations
2019-01-09T14:57:52.7989024Z ==============================================================================
2019-01-09T14:57:52.7989311Z Task : Azure SQL Database Deployment
2019-01-09T14:57:52.7989427Z Description : Deploy Azure SQL DB using DACPAC or run scripts using SQLCMD
2019-01-09T14:57:52.7989514Z Version : 1.2.9
2019-01-09T14:57:52.7989608Z Author : Microsoft Corporation
2019-01-09T14:57:52.7989703Z Help : [More Information](https://aka.ms/sqlazuredeployreadme)
2019-01-09T14:57:52.7989823Z ==============================================================================
2019-01-09T14:57:58.8012013Z Sql file: D:\a\r1\a\_db-migrations\migrations-with-transaction.sql
2019-01-09T14:57:58.8189093Z Invoke-Sqlcmd -ServerInstance "***" -Database "***" -Username "***" -Password ****** -Inputfile "D:\a\r1\a\db-migrations\migrations-with-transaction.sql" -ConnectionTimeout 120
2019-01-09T14:58:04.3140758Z ##[error]Invalid column name 'Group_DocumentTagGroupId'.Check out how to troubleshoot failures at https://aka.ms/sqlazuredeployreadme#troubleshooting-
2019-01-09T14:58:04.3480044Z ##[section]Finishing: Apply EF Migrations
Here are all the references to Group_DocumentTagGroupId in the script:
-- starting around line 1740
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20181026122735_AddEntityForDocumentTagGroup')
BEGIN
ALTER TABLE [DocumentTags] ADD [Group_DocumentTagGroupId] bigint NOT NULL DEFAULT 0;
END;
GO
-- about 20 lines with no mention of the column (but several GO statements)
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20181026122735_AddEntityForDocumentTagGroup')
BEGIN
EXEC('
UPDATE [dbo].[DocumentTags] SET [Group_DocumentTagGroupId] = (SELECT TOP 1 [DocumentTagGroupId] FROM [dbo].[DocumentTagGroups] g WHERE g.[Name] = [Type])')
END;
GO
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20181026122735_AddEntityForDocumentTagGroup')
BEGIN
CREATE INDEX [IX_DocumentTags_Group_DocumentTagGroupId] ON [DocumentTags] ([Group_DocumentTagGroupId]);
END;
GO
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20181026122735_AddEntityForDocumentTagGroup')
BEGIN
ALTER TABLE [DocumentTags] ADD CONSTRAINT [FK_DocumentTags_DocumentTagGroups_Group_DocumentTagGroupId] FOREIGN KEY ([Group_DocumentTagGroupId]) REFERENCES [DocumentTagGroups] ([DocumentTagGroupId]) ON DELETE CASCADE;
END;
GO
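As an aside on why the EF-generated script wraps that UPDATE in EXEC('...'): SQL Server compiles each batch as a whole before executing it, so a statement referencing a column that does not yet exist on an existing table fails at compile time with exactly this kind of "Invalid column name" error, even if an earlier statement in the same batch would have added the column. Dynamic SQL defers compilation of the inner statement until it runs. A minimal sketch (not taken from the original script):
-- Fails at compile time: Bar does not exist when this batch is compiled,
-- even though the ALTER TABLE would add it before the UPDATE runs.
IF COL_LENGTH('dbo.Foo', 'Bar') IS NULL
    ALTER TABLE dbo.Foo ADD Bar int NULL;
UPDATE dbo.Foo SET Bar = 3;
GO
-- Works: EXEC('...') defers compilation of the UPDATE until execution time,
-- by which point the column exists.
IF COL_LENGTH('dbo.Foo', 'Bar') IS NULL
    ALTER TABLE dbo.Foo ADD Bar int NULL;
EXEC('UPDATE dbo.Foo SET Bar = 3');
GO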
The SQL file that is sent to Invoke-SqlCmd is generated with the following PowerShell script:
$migrationsWithoutTransaction = "./Migrations/scripts/migrations-without-transaction.sql"
# Generate the idempotent migration script from the EF model
dotnet ef migrations script --configuration Release --idempotent --output $migrationsWithoutTransaction
$migrationsWithTransaction = "./Migrations/scripts/migrations-with-transaction.sql"
# Concatenate: transaction header + generated script + commit footer
Get-Content "./Migrations/begin-transaction.sql" | Out-File -Encoding Utf8 $migrationsWithTransaction
Get-Content $migrationsWithoutTransaction | Out-File -Encoding Utf8 -Append $migrationsWithTransaction
Get-Content "./Migrations/commit-transaction.sql" | Out-File -Encoding Utf8 -Append $migrationsWithTransaction
where this is the contents of the auxiliary SQL scripts:
-- Migrations/begin-transaction.sql
SET XACT_ABORT ON
BEGIN TRANSACTION
-- Migrations/commit-transaction.sql
COMMIT

Is using the Azure Release pipeline (classic) an option for you? I am successfully running EF migrations on it, creating the migration script on the fly.
See my answer here: how to execute sql script using azure devops pipeline

Related

PostgreSQL pg_dump creates sql script, but it is not a sql script: is there a way to get pg_dump to create a standard sql script?

I'm running pg_dump to create a script to automate the creation of a system like this:
pg_dump --dbname=postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI -t webapi.* > webapi.sql
This creates an SQL file, but it is not really a standard SQL script, as it contains nonstandard code like what is shown below.
When the file is run as an SQL script, it fails, giving the error shown below.
Is there a way to get pg_dump to create a script that is standard SQL and can be executed as a plain SQL script?
Code sample from sql generated by pg_dump:
COPY webapi.cohort_version (asset_id, comment, description, version, asset_json, archived, created_by_id, created_date) FROM stdin;
\.
--
-- Data for Name: concept_of_interest; Type: TABLE DATA; Schema: webapi; Owner: ohdsi_admin_user
--
COPY webapi.concept_of_interest (id, concept_id, concept_of_interest_id) FROM stdin;
1 4329847 4185932
2 4329847 77670
3 192671 4247120
4 192671 201340
Error seen when running the script generated by pg_dump:
--
-- Name: penelope_laertes_uni_pivot id; Type: DEFAULT; Schema: webapi; Owner: ohdsi_admin_user
--
ALTER TABLE ONLY webapi.penelope_laertes_uni_pivot ALTER COLUMN id SET DEFAULT nextval('webapi.penelope_laertes_uni_pivot_id_seq'::regclass)
--
-- Data for Name: achilles_cache; Type: TABLE DATA; Schema: webapi; Owner: ohdsi_admin_user
--
COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
Exception in thread "main" java.lang.RuntimeException: org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.yaorma.database.Database.executeSqlScript(Database.java:344)
at org.yaorma.database.Database.executeSqlScript(Database.java:332)
at org.nachc.tools.fhirtoomop.tools.build.postgres.build.A04_CreateAtlasWebApiTables.exec(A04_CreateAtlasWebApiTables.java:29)
at org.nachc.tools.fhirtoomop.tools.build.postgres.build.A04_CreateAtlasWebApiTables.main(A04_CreateAtlasWebApiTables.java:19)
Caused by: org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:109)
at org.apache.ibatis.jdbc.ScriptRunner.runScript(ScriptRunner.java:71)
at org.yaorma.database.Database.executeSqlScript(Database.java:342)
... 3 more
Caused by: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:286)
at org.apache.ibatis.jdbc.ScriptRunner.executeStatement(ScriptRunner.java:190)
at org.apache.ibatis.jdbc.ScriptRunner.handleLine(ScriptRunner.java:165)
at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:102)
... 5 more
--- EDIT ------------------------------------
The --inserts method in the accepted answer gave me exactly what I needed.
I ended up doing this:
pg_dump --inserts --dbname=postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI -t webapi.* > webapi.sql
The client tool you are using to restore the dump cannot deal with the data from the (nonstandard) COPY command being mixed into the script. You need psql to restore such a dump (for example, psql -d OHDSI -f webapi.sql).
You can use the --inserts option of pg_dump to create a dump that contains INSERT statements rather than COPY. That will be slower to restore, but it will work with more client tools.
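For illustration, the data sections then come out as standalone INSERT statements instead of COPY blocks, along these lines (a hypothetical excerpt based on the concept_of_interest data above; note that plain --inserts omits the column list, while --column-inserts includes it):
-- Each row becomes an ordinary statement that a generic SQL runner can execute.
INSERT INTO webapi.concept_of_interest VALUES (1, 4329847, 4185932);
INSERT INTO webapi.concept_of_interest VALUES (2, 4329847, 77670);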
However, your wish to get a standard SQL script is hopeless. PostgreSQL extends the SQL standard in many ways, so a database cannot be dumped to a purely standard script; note, for example, that indexes are not defined by the SQL standard at all. If you are hoping to use the dump to move to a different RDBMS, you will be disappointed; that is considerably more difficult.

Migrate down in golang migrate does not drop tables

I have the following down script named 000001_init_schema.down.sql
DROP TABLE IF EXISTS entries;
DROP TABLE IF EXISTS transfers;
DROP TABLE IF EXISTS accounts;
When I run
migrate -path db/migrations --database "postgresql://root:secret@localhost:5432/accountsdb?sslmode=disable" -verbose down
▶ make migratedown
migrate -path db/migrations --database "postgresql://root:secret@localhost:5432/accountsdb?sslmode=disable" -verbose down
2022/02/12 21:43:19 Are you sure you want to apply all down migrations? [y/N]
y
2022/02/12 21:43:21 Applying all down migrations
2022/02/12 21:43:21 no change
2022/02/12 21:43:21 Finished after 2.242449209s
2022/02/12 21:43:21 Closing source and database
Nothing changes. Why is that?
My corresponding up script works as expected.
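One diagnostic that may help: migrate records the applied version in a tracking table (schema_migrations by default for the Postgres driver), and "no change" on down generally means it believes no version is currently applied. A quick check, assuming the default table name:
-- Inspect what golang-migrate thinks is applied. An empty result, or a
-- connection pointing at a different database than the one holding your
-- tables, would explain why "down" finds nothing to undo.
SELECT version, dirty FROM schema_migrations;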

Flyway migration error with DB2 11.1 SP including pure xml DDL

I have a fairly complex Db2 V11.1 SP that will compile and deploy manually, but when I add the SQL to a migration script I get this issue:
https://github.com/flyway/flyway/issues/2795
As the SP compiles and deploys manually, I am confident the SP SQL is OK.
Does anyone have any idea what the underlying issue might be?
DB2 11.1
Flyway 6.4.1 (I have tried 7.x versions with same result)
The SP uses pureXML functions, so the SP SQL includes $ and # characters.
I tried using obscure statement terminator characters (~, ^), but a simple test with pureXML functions and # as the statement terminator seemed to work:
--#SET TERMINATOR #
SET SCHEMA CORE
#
CREATE OR REPLACE PROCEDURE CORE.XML_QUERY
LANGUAGE SQL
BEGIN
DECLARE GLOBAL TEMPORARY TABLE OPTIONAL_ELEMENT (
LEG_SEG_ID BIGINT,
OPTIONAL_ELEMENT_NUM INTEGER,
OPTIONAL_ELEMENT_LIST VARCHAR(100),
CLSEQ INTEGER
) ON COMMIT PRESERVE ROWS NOT LOGGED WITH REPLACE;
insert into session.optional_element
select distinct LEG_SEG_ID, A.OPTIONAL_ELEMENT_NUM, A.OPTIONAL_ELEMENT_LIST, A.CLSEQ
from core.leg_seg , XMLTABLE('$d/LO/O' passing XMLPARSE(DOCUMENT(optional_element_xml)) as "d"
COLUMNS
OPTIONAL_ELEMENT_NUM INTEGER PATH '#Num',
OPTIONAL_ELEMENT_LIST VARCHAR(100) PATH 'text()',
CLSEQ INTEGER PATH '#Seq') AS A
WHERE iv_id = 6497222690 and optional_element_xml is not null;
END
#

Updating database table & column

Using MVC4 and EF Code First
I renamed my table/column names (i.e. the table from Categories to Category) in the model and in the code, and when I run the migration statement
Update-Database -Verbose -Force
I get an error:
The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.Categories' and the index name 'PK_dbo.Categories'. The duplicate key value is (). Could not create constraint. See previous errors. The statement has been terminated.
In the configuration file I have AutomaticMigrationsEnabled = true;
Is there something else I need to do to make the changes apply in the database?
You cannot modify the PK.
1 - Delete the existing database, then run Update-Database.
2 - Generate an update script by running Update-Database -Script, add the drop-index T-SQL to the generated file, and run the script.
3 - Remove the key constraint from the table via Visual Studio or Management Studio, remove the T-SQL for dropping the index from the generated script, then run the script.
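For option 2, the T-SQL added to the generated script would look something like this sketch (the table and constraint names are taken from the error message above; adjust them to your schema):
-- Drop the old primary key before the renames are applied, so the subsequent
-- CREATE UNIQUE INDEX / primary key for the renamed table no longer collides.
ALTER TABLE [dbo].[Categories] DROP CONSTRAINT [PK_dbo.Categories];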

How to log an error into a Table from SQLPLus

I am very new to Oracle, so please bear with me if this is covered elsewhere.
I have an MS SQL box running jobs that call batch files, which run scripts in SQL*Plus to ETL into an Oracle 10g database.
I have an intermittent issue with a script that causes the ETL to fail, which at the minute, without error logging, is something of an unknown. The current solution highlights the load failure by comparing row counts from before and after the script has finished.
I'd like to be able to insert any errors encountered while running the offending script into an error log table on the same database receiving the data loads.
There's nothing too technical about the script; at a high level it performs the following steps, all via plain SQL with no procedural calls.
Updates a table with Date and current row counts
Pulls data from a remote source into a staging table
Merges the Staging table into an intermediate staging table
Performs some transformational actions
Merges the intermediate staging table into the final Fact table
Updates a table with new row counts
Please advise whether it is possible to pass error messages, codes, line numbers etc. from SQL*Plus into a database table, and if so, the easiest method to achieve this.
The first few lines of the script are shown below to give a flavour:
/*set echo off*/
set heading off
set feedback off
set sqlblanklines on
/* ID 1 BATCH START TIME */
INSERT INTO CITSDMI.CITSD_TIMETABLE_ORDERLINE TGT
(TGT.BATCH_START_TIME)
(SELECT SYSDATE FROM DUAL);
COMMIT;
insert into CITSDMI.CITSD_TIMETABLE_ALL_LOADS
(LOAD_NAME, LOAD_CRITICALITY,LOAD_TYPE,BATCH_START_TIME)
values
('ORDERLINE','HIGH','SMART',(SELECT SYSDATE FROM DUAL));
commit;
/* Clear the Staging Tables */
TRUNCATE TABLE STAGE_SMART_ORDERLINE;
Commit;
TRUNCATE TABLE TRANSF_FACT_ORDERLINE;
Commit;
and so it goes on with the rest of the steps.
Any assistance will be greatly appreciated.
Whilst not fully understanding your requirement, here are a couple of pointers.
The WHENEVER command controls what SQL*Plus should do when an error occurs, e.g.:
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
INSERT ...
INSERT ...
This will cause sqlplus to exit with error status 1 if any of the following statements fail.
You can also have WHENEVER SQLERROR CONTINUE ...
Since the WHENEVER ... EXIT FAILURE/SUCCESS command controls the exit status, the calling script/program will know whether it worked or failed.
Logging to a file: use SPOOL to spool the output to a file.
Logging to a table: the best way is to wrap your statements in PL/SQL anonymous blocks and use exception handlers to log errors.
So, putting the above together, using a UNIX shell as the invoker:
sqlplus -S /nolog <<EOF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
CONNECT ${USRPWD}
SPOOL ${SPLFILE}
BEGIN
INSERT INTO the_table ( c1, c2 ) VALUES ( '${V1}', '${V2}' );
EXCEPTION
WHEN OTHERS THEN
-- log the failure, commit the log row, then re-raise so the
-- WHENEVER SQLERROR handler still sees the error
INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( 'the_script', SQLCODE, SQLERRM );
COMMIT;
RAISE;
END;
/
SPOOL OFF
QUIT
EOF
if [ ${?} -eq 0 ]
then
echo "Success!"
else
echo "Oh dear!! See ${SPLFILE}"
fi
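For completeness, a minimal sketch of the error table the anonymous block above logs into (the table and column names are simply the hypothetical ones used in the example):
-- Hypothetical error-log table matching the_error_tab in the block above.
CREATE TABLE the_error_tab (
    name      VARCHAR2(30),
    errno     NUMBER,
    errm      VARCHAR2(512),
    logged_at DATE DEFAULT SYSDATE
);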