PostgreSQL - How to import custom data type when creating foreign table (using postgres-fdw)? - postgresql

I'm trying to create foreign tables using postgres_fdw (https://www.postgresql.org/docs/current/postgres-fdw.html).
When I run IMPORT FOREIGN SCHEMA public FROM SERVER replica_db1 INTO db1, it reports:
type "public.custom_type" does not exist
as described in the documentation linked above.
I want to know: how can I automatically copy the custom data type into the target database?
Thanks!

The documentation tells you:
If the remote tables to be imported have columns of user-defined data types, the local server must have compatible types of the same names.
So make sure that the local database has a type of the same name, and it had better be similar too (at least have the same text representation).
If you want functions and operators on that type to be pushed down, you'll have to put them into an extension that you install in both databases.
Then specify that extension in the extensions option of the foreign server.
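As a sketch (the type definition and extension name below are hypothetical; the server name replica_db1 is from the question), the steps look like this:

```sql
-- On the local database: recreate the type with the same name and a
-- compatible text representation as on the remote server.
CREATE TYPE public.custom_type AS (a integer, b text);  -- hypothetical definition

-- If the type (and its functions/operators) ships in an extension that is
-- installed on both databases, tell postgres_fdw it is safe to push down:
ALTER SERVER replica_db1 OPTIONS (ADD extensions 'my_extension');  -- hypothetical name

-- The import should then succeed:
IMPORT FOREIGN SCHEMA public FROM SERVER replica_db1 INTO db1;
```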

Related

Why does DB2 put my UserId from the connection string as my table name?

I'm trying to use Entity Framework Core to query my DB2 database. Here's how I register it:
services.AddDbContext<DB2Contexte>(options =>
    options.UseDb2(Configuration.GetConnectionString("DB2"),
        builder =>
        {
            builder.SetServerInfo(IBMDBServerType.AS400, IBMDBServerVersion.AS400_07_01);
            builder.UseRowNumberForPaging();
            builder.MaxBatchSize(1);
        }));
Here's the class that is used as a DbSet:
And there's my connection string:
Server=something;UserID=U_SERVTI;Password=something;Database=something; LibraryList=something;CurrentFunctionPath=*LIBL
Then, when I try to query the database using simple LINQ:
_dbContext.PersonneRessource.FirstOrDefault()
I get this error:
ERROR [42704] [IBM][AS] SQL0204N "U_SERVTI.DX37PERE" is an undefined name.
Why is the UserId in the name? Shouldn't it just query the table and leave out the UserID?
I use IBM.EntityFrameworkCore-lnx version 3.1.0.500.
Db2 for IBM i, for historical reasons, supports two naming conventions: SYS and SQL.
By default, external connections will use SQL naming and like the rest of the Db2 family, unqualified table references will be implicitly qualified with the "run-time authorization identifier"; normally the user id used to connect.
With SYS naming, unqualified table references are qualified with *LIBL and the library list is used to find the table.
In your connection string, you're going to want to add Naming=SYS (or possibly Naming=*SYS).
Note that SYS vs. SQL naming affects just about every unqualified reference. Be sure to look at the documentation.
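For example, the connection string from the question with the naming convention added (the exact keyword spelling can vary by driver version, so treat this as a sketch):

```
Server=something;UserID=U_SERVTI;Password=something;Database=something;LibraryList=something;CurrentFunctionPath=*LIBL;Naming=SYS
```

With SYS naming, the unqualified table reference DX37PERE is resolved through the library list instead of being implicitly qualified with U_SERVTI.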

Apache Ignite generated key for cluster but no key class

I used Ignite Web Console to generate a cluster configuration for an existing database. One of the tables in question has no key--it consists of two columns, both integers, neither of which is a key. There is a foreign key constraint that one of the columns must exist in another table, but I don't especially care about that.
In the generated cluster xml, each of the two columns is represented as a value field. These two fields match up with the generated POJO class as well. However, in the "keyType" field of the cluster config, it references a generated key class that, as far as I can tell, does not exist. If the POJO class for the table is Foo, then the key class is written down as FooKey, but this class does not exist in the project, and there is no definition for what fields would be in the key.
What am I supposed to do when referencing this cache? Do I need to create an implementation of this key class myself? When I make calls to the cache, does it need to be in the Entry format? How does the key-value store work when there is no key in the original table?
I think you'll need to add these fields to "keyType" manually. To do this, find the model under Advanced -> SQL Scheme, then select the two columns in the "Key fields" dropdown menu. This will generate the FooKey class.
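Alternatively, if you would rather write the key class by hand than regenerate the configuration, a minimal sketch of what a FooKey POJO could look like is below. The field names colA and colB are assumptions; they must match the key fields declared in the cluster config:

```java
import java.io.Serializable;
import java.util.Objects;

/** Hypothetical composite key for the two-integer table from the question. */
public class FooKey implements Serializable {
    private int colA; // assumed column name
    private int colB; // assumed column name

    public FooKey() {
        // no-arg constructor kept for (de)serialization frameworks
    }

    public FooKey(int colA, int colB) {
        this.colA = colA;
        this.colB = colB;
    }

    // equals/hashCode matter: Ignite caches look keys up by value equality.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof FooKey)) return false;
        FooKey other = (FooKey) o;
        return colA == other.colA && colB == other.colB;
    }

    @Override
    public int hashCode() {
        return Objects.hash(colA, colB);
    }
}
```

Cache calls would then use this class as the key type, e.g. cache.get(new FooKey(1, 2)).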

Alembic not generating correct changes

I am using Flask-Migrate==2.0.0, and it's not detecting the changes correctly. Every time I run python manage db migrate it generates a script for all models, although they were already added successfully in previous revisions. I have added two new columns to a table, so the migration revision is supposed to contain only those two new columns, but instead all tables are added to it. Is there anything I am missing?
EDIT 1
Here is what's happening.
I added Flask_Migrate to my project.
python manage db init
python manage db migrate
python manage db upgrade
Flask-Migrate generated tables for my models, plus an alembic_version table holding the revision
985efbf37786
After this I made some changes: I added two new columns to one of my tables and ran the command again:
python manage db migrate
It generated a new revision,
934ba2ddbd44
but instead of adding only those two new columns, the revision contains the script for all tables plus those two new columns. So, for instance, in my first revision I have something like this:
op.create_table('forex_costs',
    sa.Column('code', sa.String(), nullable=False),
    sa.Column('country', sa.String(), nullable=False),
    sa.Column('rate', sa.Numeric(), nullable=False),
    sa.PrimaryKeyConstraint('code', 'country', name='forex_costs_id'),
    schema='regis'
)
The second revision contains exactly the same code. I don't understand why, since it has already been generated.
I googled a little, and my problem looks exactly like https://github.com/miguelgrinberg/Flask-Migrate/issues/93, but I am not using an Oracle DB; I am using PostgreSQL. Also, I don't know if it has any effect, but I am not creating my tables in the default public schema. Instead, I have created two new schemas (schema_a and schema_b), just to organize things, as I have a lot of tables (around 100).
EDIT 2
The first problem seems to have been resolved by adding
include_schemas=True
in env.py.
Now the new migration no longer tries to create already-existing tables, but it has some issues with foreign keys. Every time I create a new revision, it tries to remove the already-existing foreign keys and then add them back. The logs look like this:
INFO [alembic.autogenerate.compare] Detected removed foreign key (post_id)(post_id) on table album_photos
INFO [alembic.autogenerate.compare] Detected removed foreign key (album_id)(album_id) on table album_photos
INFO [alembic.autogenerate.compare] Detected removed foreign key (user_id)(user_id) on table album_photos
INFO [alembic.autogenerate.compare] Detected added foreign key (album_id)(album_id) on table prodcat.album_photos
INFO [alembic.autogenerate.compare] Detected added foreign key (post_id)(post_id) on table prodcat.album_photos
INFO [alembic.autogenerate.compare] Detected added foreign key (user_id)(user_id) on table prodcat.album_photos
I have tried adding a name to each foreign key constraint, but that doesn't have any effect.
Thanks for coming back and providing your feedback after you solved the issue. I struggled with the same issue for two hours while using Postgres.
Btw, I would like to point out that you have to include the include_schemas option in the context.configure block, like so:
context.configure(connection=connection,
target_metadata=target_metadata,
include_schemas=True,
process_revision_directives=process_revision_directives,
**current_app.extensions['migrate'].configure_args)
Setting search_path to public fixed this issue. I had always thought that, in addition to setting the schema explicitly on each model, we also needed to add those schemas to search_path. I was wrong: changing the PostgreSQL search_path is not necessary once the schema is defined explicitly on each model.
The search path means that reflected foreign key definitions will not match what you have in your model. This only applies to foreign keys because that's how PostgreSQL does it. Read through http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path for background. - Michael Bayer
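For reference, declaring the schema explicitly on each model (so that autogenerate with include_schemas=True compares against the schema-qualified table) looks like this in SQLAlchemy. The model below is a hypothetical illustration modeled on the forex_costs table from the revision above:

```python
from sqlalchemy import Column, Numeric, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ForexCost(Base):
    # Explicit schema: the table is registered as 'regis.forex_costs',
    # so Alembic compares against that name rather than relying on
    # the connection's search_path.
    __tablename__ = "forex_costs"
    __table_args__ = {"schema": "regis"}

    code = Column(String, primary_key=True)
    country = Column(String, primary_key=True)
    rate = Column(Numeric, nullable=False)

# The metadata key is schema-qualified:
print(Base.metadata.tables["regis.forex_costs"].schema)  # -> regis
```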

Entity Framework 4.1 Complex Type reuse in different models

Here is the scenario, for which I could not find anything useful. Maybe I'm the first person thinking of doing it this way:
Approach: Database First
Database: SQL Server 2008 R2
Project : DLL (Data Access)
I have a data access library which encapsulates all the access to database as well as the biz functionality. The database has many tables and all the tables have the following 2 columns:
last_updated_on: smalldatetime
last_updated_by: nvarchar(50)
The project contains several models (edmx files), each containing only related entities mapped to the tables they represent. Since each of the tables contains the last_updated_* columns, I created a complex type in one of the models, as follows:
Complex Type: History
By (string: last_updated_by)
On (DateTime: last_updated_on)
The problem is that it can only be used in the model in which I defined it.
A) If I try to use it in another model, it does not show up in the designer.
B) If I define it in the other models, I get the error "History already defined".
Is there any solution so that the History complex type, defined in one model can be reused by other models?
I was trying to do almost the exact same thing (my DB fields are "created", "creatorId", "modified", and "modifierId", wrapped up into a complex type RecordHistory) and ran into this question before finding an answer...
http://msdn.microsoft.com/en-us/data/jj680147#Mapping outlines the solution, but since it's fairly simple I'll cover the basics here too:
1. First create the complex type as you did (select the fields in the .edmx Designer, right-click, and select "Refactor into New Complex Type")
2. In the .edmx Designer (NOT the Model Browser), right-click on another table/entity that has the same common columns and select Add -> Complex Property
3. The new property will be assigned a complex type automatically. If you have more than one complex type, edit the new property's properties and set the Type appropriately
4. Right-click on the table/entity again (in either the Model Browser or the Designer) and select Table Mapping
5. Update the Value/Property column for each of your common fields (changing them, in my case, from "modified : DateTime" to "history.modified : DateTime")
6. Delete the old common fields from your entity, leaving just the complex type in their place

Core data: get table and column information from object model

Is it possible to programmatically find out how Core Data maps a given class property to the database, i.e. in which table and which column the information will be stored?
Background: I would like to place an index on a specific column. I can find out the column by looking at the SQL core data executes. But there should be a more generic way to place the index, than hard-coding.
No, the SQLite schema is a (private) implementation detail of Core Data. You can instead enable indexing of a property in the model editor in Xcode.