Flyway migration error with DB2 11.1 SP including pureXML DDL - db2

I have a fairly complex Db2 V11.1 stored procedure (SP) that compiles and deploys manually, but when I add the SQL to a migration script I hit this issue:
https://github.com/flyway/flyway/issues/2795
Since the SP compiles and deploys manually, I am confident the SP SQL is OK.
Does anyone have any idea what the underlying issue might be?
DB2 11.1
Flyway 6.4.1 (I have tried 7.x versions with the same result)
The SP uses pureXML functions, so the SP SQL includes $ and # characters. I tried using obscure statement terminator characters (~, ^), but a simple test with pureXML functions and # as the statement terminator seemed to work:
--#SET TERMINATOR #
SET SCHEMA CORE
#
CREATE OR REPLACE PROCEDURE CORE.XML_QUERY
LANGUAGE SQL
BEGIN
  DECLARE GLOBAL TEMPORARY TABLE OPTIONAL_ELEMENT (
    LEG_SEG_ID BIGINT,
    OPTIONAL_ELEMENT_NUM INTEGER,
    OPTIONAL_ELEMENT_LIST VARCHAR(100),
    CLSEQ INTEGER
  ) ON COMMIT PRESERVE ROWS NOT LOGGED WITH REPLACE;

  INSERT INTO SESSION.OPTIONAL_ELEMENT
  SELECT DISTINCT LEG_SEG_ID, A.OPTIONAL_ELEMENT_NUM, A.OPTIONAL_ELEMENT_LIST, A.CLSEQ
  FROM CORE.LEG_SEG,
       XMLTABLE('$d/LO/O' PASSING XMLPARSE(DOCUMENT(OPTIONAL_ELEMENT_XML)) AS "d"
         COLUMNS
           OPTIONAL_ELEMENT_NUM INTEGER PATH '#Num',
           OPTIONAL_ELEMENT_LIST VARCHAR(100) PATH 'text()',
           CLSEQ INTEGER PATH '#Seq') AS A
  WHERE IV_ID = 6497222690 AND OPTIONAL_ELEMENT_XML IS NOT NULL;
END
#
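One thing worth checking (my assumption, not something confirmed by the linked Flyway issue): by default Flyway performs ${...} placeholder substitution inside migration scripts, and scripts containing literal $ characters sometimes trip over it. Disabling placeholder replacement is a low-risk experiment; the setting below is real Flyway configuration, but whether it cures this particular parse failure is a guess.

```
# flyway.conf (or the equivalent Maven/Gradle/API property):
# turn off ${...} placeholder substitution so '$' in the SP body
# is passed through to DB2 verbatim.
flyway.placeholderReplacement=false
```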

Related

How to create a function in PostgreSQL like SQL Server BACKUP DATABASE TO DISK

I'm trying, without success, to create a function in Postgres that saves a table or database, taking one or two parameters. In this case I was trying to create it with only one parameter (the name of the table or database) and back up that table/db:
--SELECT backup_table(sports)
CREATE FUNCTION backup_table(TEXT) RETURNS BOOLEAN AS
$$
DECLARE
table_x ALIAS FOR $1;
BEGIN
COPY table_x FROM 'C:/path/backup_db' WITH (FORMAT CSV);
RAISE NOTICE 'Saved correctly the table %',$1;
RETURN BOOLEAN;
END;
$$ LANGUAGE plpgsql;
I always receive this error when I try to execute the function with SELECT backup_table(sports):
"The column sports doesnt exists."
SQL state: 42703
Character: 21
The idea is to create the function as the equivalent of SQL Server's BACKUP DATABASE TO DISK, or of the pg_dump command:
pg_dump -U -W -F t sports > C:/path/backup_db;
I know SQL, but I'm just stuck on this error.
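For what it's worth, a hedged sketch of a version that should run (my reading of the intent, not the asker's final code). There are three separate problems: the call must quote the name as a text literal, SELECT backup_table('sports'), because a bare sports is parsed as a column reference, which is exactly the 42703 error; COPY cannot take its table name from a variable, so dynamic SQL via EXECUTE format() is needed; and a backup should COPY ... TO a file, not FROM one. The output path is carried over from the question:

```sql
-- Sketch only: call as SELECT backup_table('sports');
CREATE OR REPLACE FUNCTION backup_table(tbl TEXT) RETURNS BOOLEAN AS
$$
BEGIN
    -- COPY does not accept a variable table name, so build the
    -- statement dynamically; %I quotes the identifier, %L the path.
    EXECUTE format('COPY %I TO %L WITH (FORMAT CSV)',
                   tbl, 'C:/path/backup_db');
    RAISE NOTICE 'Saved table % correctly', tbl;
    RETURN TRUE;
END;
$$ LANGUAGE plpgsql;
```

Note that COPY runs on the database server, so the file is written on the server host with the server's permissions; for a real backup, pg_dump (as mentioned above) remains the right tool.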

How can I fill in a table from a file when using flyway migration scripts

I have these scripts:
/*The extension is used to generate UUID*/
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- auto-generated definition
create table users
(
id uuid not null DEFAULT uuid_generate_v4 ()
constraint profile_pkey
primary key,
em varchar(255),
user varchar(255)
);
In IntelliJ IDEA (a project with Spring Boot):
src/main/resources/db-migration
src/main/resources/sql_scripts :
copy.sql
user.txt
I'm just trying to run a simple SQL command for now to see that everything works:
copy.sql
COPY profile FROM '/sql_scripts/user.txt'
USING DELIMITERS ',' WITH NULL AS '\null';
user.txt
'm#mai.com', 'sara'
's#yandex.ru', 'jacobs'
But when I run the copy command, I get an error:
ERROR: could not open file...
Does anyone know how this should work and what needs to be fixed?
There's a strong possibility it's a pathing issue; could you try, instead of
COPY profile FROM '/sql_scripts/user.txt'
doing
COPY profile FROM './sql_scripts/user.txt'
(or an absolute path)
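A related point, hedged: PostgreSQL's COPY ... FROM reads the file on the database server, with the server process's permissions, so a path relative to the Spring Boot project will generally not resolve at all. A sketch of the two usual options, assuming the users table from the CREATE TABLE above (the question's COPY targets profile, which doesn't match that script), and using the modern WITH (FORMAT csv) spelling in place of the legacy USING DELIMITERS syntax:

```sql
-- Option 1: server-side COPY with an absolute path readable by the server process
COPY users FROM '/absolute/path/to/sql_scripts/user.txt'
    WITH (FORMAT csv, NULL '\null');

-- Option 2 (psql only): \copy reads the file on the *client* instead,
-- so a project-relative path works:
-- \copy users FROM './sql_scripts/user.txt' WITH (FORMAT csv, NULL '\null')
```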

PostgreSql foreign table select fails due to special characters rows

I just set up a new foreign table, and it works as intended if I only select the "ID" (integer) field.
When I add the "Description" (text) field and try to select from the table, it fails with this error message:
utf-8 codec can't decode byte 0xfc in position 10: invalid start byte
After checking the remote table, I found that "Description" contains special characters like "ö", "ü", and "ä".
What can I do to fix this?
Table definitions (Only first 2 rows)
Remote table:
CREATE TABLE test (
[Id] [char](8) NOT NULL,
[Description] [nvarchar](50) NOT NULL
)
Foreign table:
Create Foreign Table "Test" (
"Id" Char(8),
"Description" VarChar(50)
) Server "Remote" Options (
schema_name 'dbo', table_name 'test'
);
Additional information:
Foreign data wrapper: tds_fdw
Local server: Postgres 12, encoding: UTF8
Remote server: Sql Server, encoding: Latin1_General_CI_AS
As Laurenz Albe suggested in the comments, I created a freetds.conf in my PostgreSQL folder with the following content:
[global]
tds version = auto
client charset = UTF-8
Don't forget to set the path to the configuration file in the environment variable FREETDS.
Powershell:
[System.Environment]::SetEnvironmentVariable('FREETDS','C:\Program Files\PostgreSQL\12',[System.EnvironmentVariableTarget]::Machine)

Invoke-SqlCmd for migrations script fails on columns that will be introduced

I'm trying to get database migrations into my release pipeline definition, roughly following this approach: dotnet ef migrations script --idempotent as part of the build pipeline, and then an Invoke-SqlCmd task pointing to the resulting script as part of the release pipeline.
However, I have made one change from that blog post that might be important: to avoid corrupt states where half a migration is deployed, I wrap the entire script in a transaction, so that it is effectively:
SET XACT_ABORT ON
BEGIN TRANSACTION
-- output of dotnet ef migrations script --idempotent
COMMIT
However, when I try to run the script as part of my release pipeline, in an Azure SQL Database task just like the one used in the blog post, it fails on references to things that don't exist in the schema yet, but will by the time the script reaches the statement containing the reference.
For example, considering the following migrations script:
SET XACT_ABORT ON
BEGIN TRANSACTION
-- in the actual script, each of these statements are, individually, wrapped in an IF that
-- checks whether the migration has been run before or not, similar to the first one in
-- this listing
IF NOT EXISTS (SELECT * FROM [__EFMigrationsHistory] WHERE MigrationId = 'AddColumnBar')
BEGIN
ALTER TABLE Foo ADD Bar int NULL
END
GO
UPDATE Foo SET Bar = 3
GO
ALTER TABLE Foo ALTER COLUMN Bar int NOT NULL
GO
COMMIT
Executing this script with Invoke-SqlCmd as suggested in the blog post yields an error stating that the column Bar does not exist.
How do I tell Invoke-SqlCmd that I know it doesn't, but it will when it needs to? If that's not possible, what's the best approach to fix this deployment flow for Azure DevOps?
Additional info:
Here's the complete log from the SQL step in the release pipeline:
2019-01-09T14:57:52.7983184Z ##[section]Starting: Apply EF Migrations
2019-01-09T14:57:52.7989024Z ==============================================================================
2019-01-09T14:57:52.7989311Z Task : Azure SQL Database Deployment
2019-01-09T14:57:52.7989427Z Description : Deploy Azure SQL DB using DACPAC or run scripts using SQLCMD
2019-01-09T14:57:52.7989514Z Version : 1.2.9
2019-01-09T14:57:52.7989608Z Author : Microsoft Corporation
2019-01-09T14:57:52.7989703Z Help : [More Information](https://aka.ms/sqlazuredeployreadme)
2019-01-09T14:57:52.7989823Z ==============================================================================
2019-01-09T14:57:58.8012013Z Sql file: D:\a\r1\a\_db-migrations\migrations-with-transaction.sql
2019-01-09T14:57:58.8189093Z Invoke-Sqlcmd -ServerInstance "***" -Database "***" -Username "***" -Password ****** -Inputfile "D:\a\r1\a\db-migrations\migrations-with-transaction.sql" -ConnectionTimeout 120
2019-01-09T14:58:04.3140758Z ##[error]Invalid column name 'Group_DocumentTagGroupId'.Check out how to troubleshoot failures at https://aka.ms/sqlazuredeployreadme#troubleshooting-
2019-01-09T14:58:04.3480044Z ##[section]Finishing: Apply EF Migrations
Here are all the references to Group_DocumentTagGroupId in the script:
-- starting around line 1740
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20181026122735_AddEntityForDocumentTagGroup')
BEGIN
ALTER TABLE [DocumentTags] ADD [Group_DocumentTagGroupId] bigint NOT NULL DEFAULT 0;
END;
GO
-- about 20 lines with no mention of the column (but several GO statements)
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20181026122735_AddEntityForDocumentTagGroup')
BEGIN
EXEC('
UPDATE [dbo].[DocumentTags] SET [Group_DocumentTagGroupId] = (SELECT TOP 1 [DocumentTagGroupId] FROM [dbo].[DocumentTagGroups] g WHERE g.[Name] = [Type])')
END;
GO
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20181026122735_AddEntityForDocumentTagGroup')
BEGIN
CREATE INDEX [IX_DocumentTags_Group_DocumentTagGroupId] ON [DocumentTags] ([Group_DocumentTagGroupId]);
END;
GO
IF NOT EXISTS(SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20181026122735_AddEntityForDocumentTagGroup')
BEGIN
ALTER TABLE [DocumentTags] ADD CONSTRAINT [FK_DocumentTags_DocumentTagGroups_Group_DocumentTagGroupId] FOREIGN KEY ([Group_DocumentTagGroupId]) REFERENCES [DocumentTagGroups] ([DocumentTagGroupId]) ON DELETE CASCADE;
END;
GO
The SQL file that is sent to Invoke-SqlCmd is generated with the following PowerShell script
$migrationsWithoutTransaction = "./Migrations/scripts/migrations-without-transaction.sql"
dotnet ef migrations script --configuration Release --idempotent --output $migrationsWithoutTransaction
$migrationsWithTransaction = "./Migrations/scripts/migrations-with-transaction.sql"
Get-Content "./Migrations/begin-transaction.sql" | Out-File -Encoding Utf8 $migrationsWithTransaction
Get-Content $migrationsWithoutTransaction | Out-File -Encoding Utf8 -Append $migrationsWithTransaction
Get-Content "./Migrations/commit-transaction.sql" | Out-File -Encoding Utf8 -Append $migrationsWithTransaction
where these are the contents of the auxiliary SQL scripts:
-- Migrations/begin-transaction.sql
SET XACT_ABORT ON
BEGIN TRANSACTION
-- Migrations/commit-transaction.sql
COMMIT
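One workaround worth trying (my assumption, not something from the blog post): defer compilation of the statements that touch the new column by wrapping them in EXEC(' ... '), the same trick the generated script already uses for the UPDATE in the real migration. A batch inside EXEC is not parsed until it actually executes, so a reference to a not-yet-existing column no longer fails the script up front. Applied to the simplified example:

```sql
IF NOT EXISTS (SELECT * FROM [__EFMigrationsHistory] WHERE MigrationId = 'AddColumnBar')
BEGIN
    ALTER TABLE Foo ADD Bar int NULL
END
GO
-- EXEC defers parsing, so these batches are accepted even while Bar is missing
EXEC('UPDATE Foo SET Bar = 3')
GO
EXEC('ALTER TABLE Foo ALTER COLUMN Bar int NOT NULL')
GO
```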
Is using the Azure Release pipeline (classic) an option for you? I am successfully running EF migrations on it while creating the migration script on the fly.
See my answer here: how to execute sql script using azure devops pipeline

PostgreSQL (shp2pgsql) AddGeometryColumn gives "No function matches the given name"

I'm working with the PADUS OBI shape file, not that that's probably important.
I'm running the shape file through shp2pgsql using the default options, as in:
shp2pgsql PADUS_1_1_CBI_Edition.shp > PADUS.sql
Then I'm trying to import the SQL into Postgres by doing:
psql -d padusdb -f PADUS.sql
And getting the following error:
psql:PADUS.sql:36: ERROR: function addgeometrycolumn(unknown, unknown, unknown, unknown, unknown, integer) does not exist
LINE 1: SELECT AddGeometryColumn('','padus_1_1_cbi_edition','the_geo...
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I have PostGIS installed.
The SQL commands leading to the error (being put into an otherwise empty database) are:
SET CLIENT_ENCODING TO UTF8;
SET STANDARD_CONFORMING_STRINGS TO ON;
BEGIN;
CREATE TABLE "padus_1_1_cbi_edition" (gid serial PRIMARY KEY,
"us_id" int4,
"category" varchar(10),
"gis_acres" numeric,
...
BUNCH OF COLUMNS, none of which is called "the_geom"
...
"comments" varchar(200),
"shape_leng" numeric,
"shape_area" numeric);
SELECT AddGeometryColumn('','padus_1_1_cbi_edition','the_geom','-1','MULTIPOLYGON',2);
COMMIT;
Any thoughts on what this might mean and how to resolve the problem?
So, as it turns out, it is not enough to simply have installed PostGIS on one's machine.
Originally, I'd chosen sudo apt-get install postgresql postgis on Ubuntu 10.10. This left me with a working version of PostgreSQL 8.4, but no sign of PostGIS.
Therefore, I tried sudo apt-get install postgresql-8.4-postgis.
But one's work doesn't end there! You need to set up the PostGIS database.
This website provides instructions on doing this and using the database afterwards.
It also sounds like the database needs to be spatially enabled. The reason it's throwing that error is that the function is missing. This resource has a quick and easy answer and solution.
This error indicates that the function cannot be recognized (either the function name or the parameter types are incorrect).
These are the definitions for AddGeometryColumn in v7.2:
text AddGeometryColumn(varchar table_name, varchar column_name, integer srid, varchar type, integer dimension);
text AddGeometryColumn(varchar schema_name, varchar table_name, varchar column_name, integer srid, varchar type, integer dimension);
text AddGeometryColumn(varchar catalog_name, varchar schema_name, varchar table_name, varchar column_name, integer srid, varchar type, integer dimension);
It looks to me like you're trying to use the 2nd definition. Try changing it to use the first definition (no schema), and try unquoting the srid (-1), since it should be passed as an integer.
You may need to cast everything...
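Concretely, that suggestion would turn the generated line into something like this (a sketch of the five-argument form with the srid unquoted, not a tested fix):

```sql
-- No schema argument; srid passed as a bare integer:
SELECT AddGeometryColumn('padus_1_1_cbi_edition', 'the_geom', -1, 'MULTIPOLYGON', 2);
```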
Thanks atorres757! Your answer solved my problem in minutes. I deleted my database, created a new one choosing template_postgis as the template, and all shapefiles are importing fine with my Python script:
for lyr in iList:
    os.system("shp2pgsql -c -s 4326 -k -I -W UTF-8 " + lyr[:-4] + " " + lyr[:-4] + " | psql -d AWM -p 5432 -U postgres")