Migrating from SQL Server to Aurora PostgreSQL: encountering GUID, VARCHAR, UUID issues

I'm seeking some advice.
I've migrated a database from SQL Server to Aurora PostgreSQL using AWS DMS. In most of the tables in SQL Server, the primary keys are a uniqueidentifier (GUID). When migrated to Postgres these columns are converted to VARCHAR(36). This seems to be as expected, per the AWS DMS documentation.
In our .NET application we use Entity Framework 6, to which I have added a new DbContext that uses the Npgsql provider. Note that we are keeping the existing SQL Server EF6 providers; essentially, the application will use both SQL Server and PostgreSQL. This is all hooked up fine.
Where I run into issues is when my Postgres context fetches from the PostgreSQL database; it encounters a lot of errors like:
Npgsql.PostgresException: 42883: operator does not exist: character varying = uuid
I understand the issue: the application using EF fetches by Id (a GUID), while the Postgres table has an Id of VARCHAR type...
My feeling is that the problem is not on the application or EF side; rather, the column on the table should be something like a UUID. I can do that post-migration by simply altering the column to the UUID type, but is this the right way, and will it resolve my issues? I also feel this can't be a unique case; it seems like a common issue for anyone migrating a .NET app from SQL Server to PostgreSQL...
I look forward to hearing some of your ideas, comments, thoughts on this. Thanks in advance.

It seems that this migration procedure is not quite up to the task, as a GUID (which is Microsoft's confusing term for UUID) should be migrated to uuid. Not only would you save 21 bytes of storage space per row, but you also wouldn't have this problem.
It seems that your application is comparing a uuid value with one of the migrated varchars:
WHERE uniqueidentifier = UUID '87595807-3157-4a81-ac89-3e09e83c0c0a'
You have to add an explicit cast, like the error message says:
WHERE uniqueidentifier = CAST (UUID '87595807-3157-4a81-ac89-3e09e83c0c0a' AS text)
You would cast to text, not to varchar, because there is no equality operator for varchar; varchar is coerced to text when you compare it, since the storage for the two types is identical.
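If you would rather fix the schema, as you suggest, the post-migration conversion is a single ALTER per key column. A minimal sketch, assuming a hypothetical table orders whose varchar(36) primary key is named id:
-- The USING clause parses each stored string into a native uuid value.
ALTER TABLE orders
    ALTER COLUMN id TYPE uuid
    USING id::uuid;
Any foreign key columns that reference the key need the same conversion (dropping and re-creating the constraints around it), after which EF can map the property as a Guid with no cast at all.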

Related

What is the best way to store a Twitter Snowflake string in a PostgreSQL database?

https://developer.twitter.com/en/docs/twitter-ids
I would like to store such Snowflake IDs in my PostgreSQL database; what would be the appropriate constraint and datatype for this?
At first I thought "id" VARCHAR(19) NOT NULL, but then I started wondering if there is something more accurate.
The page you linked in the question says that Twitter IDs are 64-bit integers, so you can use bigint for the column type in Postgres.
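A minimal sketch with a hypothetical tweets table; Snowflake leaves the sign bit unused, so a signed bigint covers the ID range:
-- Store Snowflake IDs as native 64-bit integers rather than 19-char strings.
CREATE TABLE tweets (
    id bigint NOT NULL PRIMARY KEY
);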

Informix Stored Procedures with Entity Framework Issue

I am trying to import a stored procedure into the entity model using the database-first approach, and the import fails with the following warnings. I am using Visual Studio 2015 Update 3.
Error 6005: The function 'ar_get_contact_name' has a return data type 'varchar' that is currently not supported for the target Entity Framework version. The function was excluded.
Error 6046: Unable to generate function import return type of the store function 'ar_get_contact_name'. The function will be ignored and the function import will not be generated.
The table and SP are as follows:
create table "entityframework".ar_contact
(
contact_code char(10) not null primary key,
name char(80) not null
);
CREATE PROCEDURE "entityframework".ar_get_contact_name (
    cont_code LIKE ar_contact.contact_code)
RETURNING VARCHAR(50);
    DEFINE cont_name VARCHAR(255);
    SELECT ar_contact.name
        INTO cont_name
        FROM ar_contact
        WHERE cont_code = contact_code;
    RETURN cont_name;
END PROCEDURE;
Is there any workaround for this?
Truthfully, Informix does not have a good Entity Framework (EF) support story yet.
Right now, for a .NET application to use EF functionality, it has to use the DB2 driver to connect to Informix over the DRDA protocol.
When using the DB2 EF driver to connect to an Informix database, much of the functionality works (but not all).
Certain functionality differs at the database level between DB2 and Informix. The return values of stored procedures and functions are one such difference, and very likely that is what you are running into.

A recent change in PostgreSQL or Ecto regarding the ID column of a table

I'm using the latest version of:
% psql --version
psql (PostgreSQL) 9.6.5
So far my Phoenix application has worked fine. The last time I reset my db was about 2-3 weeks ago.
Now, after I've reset it, my custom psql function has started throwing an exception related to "integer vs bigint" for the ID/primary key column:
DETAIL: Returned type bigint does not match expected type integer in column 1.
But it's always been integer in my app with no problem.
The thing is that I've not changed anything in the migrations related to ID columns.
Have there been any breaking changes in Ecto or PostgreSQL related to ID/primary key datatypes?
P.S.
In all my old Phoenix applications, all ID columns are integers; that's how they were generated by Ecto or Phoenix. I've not reset the db in those apps. However, in this app they're now generated as bigint. Why? Where can I read about this?
Ecto changed the default type for IDs from integer to bigint. Here are the issue and PR on the Ecto GitHub repo, if you want to see the code and read about it:
https://github.com/elixir-ecto/ecto/issues/1879
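In PostgreSQL terms, the change amounts to the difference sketched below (posts is a hypothetical table name; serial and bigserial are Postgres shorthand for an integer or bigint column backed by a sequence):
-- What a newer Ecto migration generates for the default primary key:
CREATE TABLE posts (
    id bigserial PRIMARY KEY
);
-- Older Ecto versions generated the 32-bit equivalent:
-- id serial PRIMARY KEY
Your custom function was presumably written against integer IDs while the column is now bigint, hence the type mismatch in the DETAIL message.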
For general discussion you might want to create a topic at elixirforum.com.

Crate DB exception, no viable alternative at input 'VARCHAR'

I'm using Elasticsearch to store a large amount of data and make it searchable, but for configuration items I'm still using HSQLDB.
Is it possible to eliminate HSQLDB completely and use my existing Elasticsearch in combination with CrateDB?
Things I have tried:
I tried connecting to my existing Elasticsearch using the Crate driver and Crate client, but I got the exception No handler found for action "crate_sql". Does that mean I cannot use my existing ES and have to use the inbuilt ES in CrateDB?
After connecting to CrateDB's Elasticsearch (and not my existing ES), I was able to get a connection using the CrateDriver and run SQL queries. But in one module I'm creating a table using the command below:
create table some_table_name
(
id VARCHAR(256),
userName VARCHAR(256),
fieldName VARCHAR(256),
primary key (id),
unique (userName, fieldName)
);
...but then I got an exception:
io.crate.action.sql.SQLActionException: line 1:28: no viable alternative at input 'VARCHAR'
Does that mean I cannot write CREATE TABLE queries using SQL syntax and SQL data types?
I know it will work if I use the string data type instead of varchar, but I don't want to change all those queries now.
1) No, you cannot use existing ES nodes together with Crate. The whole SQL analyzer/planner/execution layer is done server-side, not client-side; in fact, the Crate clients are rather dumb.
2) You'll have to change the types and also remove or change anything that isn't supported by Crate. For example, defaults and unique constraints aren't supported (up to 0.39; support might be added in the future).
In your case the varchar type isn't valid in Crate; instead, you'll have to use string.
See Data Types Documentation for a list of supported data types.
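For reference, a sketch of the same table in Crate's dialect under those constraints (circa 0.39): varchar becomes string, and the unique constraint has no equivalent, so it is dropped here.
-- Note that Crate folds unquoted identifiers such as userName to lowercase.
create table some_table_name (
    id string primary key,
    userName string,
    fieldName string
);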

Database replication from SQL Server 2000 to PostgreSQL

We are SQL Server users, and recently we have one database on PostgreSQL. For consistency purposes we replicate a database on SQL Server 2000 to another database on SQL Server 2000, and now we also need to replicate it to the database on PostgreSQL.
We were able to do that using ODBC and a Linked Server: we created an ODBC DSN for the PostgreSQL database and, using that DSN, created a Linked Server on SQL Server. We were able to replicate tables from the SQL Server database through that Linked Server, and hence to the PostgreSQL database, successfully.
The issue is that during replication the datatypes bit, numeric(12,2) and decimal(12,2) are converted to character(1), character(40) and character(40) respectively. Is there any solution for retaining those data types in the PostgreSQL database? That is, bit should become boolean, and the numeric and decimal data types should remain as they are in the replicated PostgreSQL table. We are using PostgreSQL 9.x.
The SQL Server table:
CREATE TABLE tmtbl
(
id int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
Code varchar(15),
booleancol bit,
numericcol numeric(10, 2),
decimalcol decimal(10, 2)
)
After being replicated to PostgreSQL, it becomes:
CREATE TABLE tmtbl
(
id integer,
"Code" character varying(15),
booleancol character(1),
numericcol character(40),
decimalcol character(40)
)
Thank you very much.
Please use:
boolean for the true/false columns (PostgreSQL's bit type is a bit string, not a boolean like SQL Server's bit);
NUMERIC, which also exists in PostgreSQL (per the SQL standard), though I suggest preferring native PostgreSQL types where possible, as they will perform faster.
I recommend creating the target table on the PostgreSQL side manually, specifying the proper field types, as the ODBC + Linked Server combination is not doing its job properly.
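For example, a minimal sketch of such a manually created target table, with the types mapped from the SQL Server DDL above:
CREATE TABLE tmtbl
(
    -- id values arrive from SQL Server's IDENTITY column during
    -- replication, so a plain integer key is sufficient on the replica.
    id integer PRIMARY KEY,
    "Code" varchar(15),
    booleancol boolean,
    numericcol numeric(10, 2),
    decimalcol decimal(10, 2)
);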
You can always consult this part of the official documentation for existing data types.
Have you heard of Foreign Data Wrappers?
http://wiki.postgresql.org/wiki/Foreign_data_wrappers