`Cannot drop table "globalLinks" because other objects depend on it` despite no models or objects depending on it. Sequelize and Postgres

This error is preventing me from running sequelize db:migrate:undo:all.
The globalLinks table is a table I created in my second migration.
There are associations for this table created in a third migration.
There are no associations in any of the remaining models.
Are the objects mentioned in this error log referring to columns? tables? cells?
I know db:migrate:undo:all would undo each migration in reverse order starting from the most recent, so what would remain by the time I try to drop this table?
If it is any clue, I am undoing all of these migrations because the same table gives me an issue when I try to add a column: ERROR: column "userId" of relation "globalLinks" already exists
What's up with this table?
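For what it's worth, Postgres itself can tell you what it thinks still depends on the table. A minimal sketch, assuming the table is really named "globalLinks" (mixed case, so it must stay quoted):

-- list constraints on other tables that still reference "globalLinks"
SELECT conname, conrelid::regclass AS dependent_table
FROM pg_constraint
WHERE confrelid = '"globalLinks"'::regclass;

-- or, destructively, let Postgres drop the dependents together with the table
DROP TABLE "globalLinks" CASCADE;

Running the catalog query before undoing the migrations should reveal whether the leftover objects are constraints, views, or something else.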

Related

Ambiguous column reference "ctid" in SELECT with more than one table

I'm using CRecordset to query one table, but I use a second table to filter data. If in my GetDefaultSQL override method I return a table list with more than one table, then I get this ERROR: column reference "ctid" is ambiguous. I know what a "ctid" column is, but I don't use it in my code. It's inserted into the original SQL statement by the ODBC driver. How to fix this? How to tell the ODBC driver not to insert the "ctid" column?
I tried to call CRecordset::Open with the readOnly parameter, since I assumed ODBC needs ctid to update rows and I don't need to update them. But the error remains.
I also tried adding a primary key to the second table, which was missing one, thinking that if a table has a primary key then ODBC can use that instead of ctid, but again no luck. That makes sense though, because I don't fetch any column of that second table; it is used just for filtering.
If I make a DB view to work around the issue, I get ERROR: column "ctid" does not exist.
You have to call CRecordset::Open with two parameters changed:
m_pSet->Open(CRecordset::snapshot, NULL, CRecordset::readOnly);
Then you can fetch both the joined tables and the view without errors; no ctid column is injected.
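For context, ctid is a system column that exists on every Postgres table, so as soon as the driver appends an unqualified ctid to a FROM list with two tables, the reference is ambiguous. A hypothetical sketch of the generated SQL (t1 and t2 are stand-ins for the real tables):

SELECT "ctid", t1.col FROM t1, t2 WHERE t1.id = t2.id;    -- ERROR: column reference "ctid" is ambiguous
SELECT t1."ctid", t1.col FROM t1, t2 WHERE t1.id = t2.id; -- qualified, so unambiguous

The snapshot/readOnly combination apparently stops the driver from requesting ctid at all, since ctid is only needed to locate rows for positioned updates.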

When I add a column in the database, under what conditions do I need to update my EDMX?

To elaborate:
I know that if I add a non-nullable field, I need to update the model if I want to write to the database. What if I just want to read?
What if it's a nullable field? Can I both read and write?
What if I were to change the primary key to the new column but the edmx still has the old column as primary?
1) If you want to port an old database, you need to make sure that every table in your database has a primary key. This is the only requirement for creating the EDMX.
2) If you've added a column to a table on the database side and have not updated the EDMX, you'll simply not be able to use that column through Entity Framework.
If you create a non-nullable column with no default value, the insert operation will fail with the exception "Cannot insert null into column , statement terminated". And you'll not be able to read values of that column using Entity Framework unless you update the EDMX (see the sketch after this answer).
3) If you've changed the primary key of any table on the database side and the EDMX is not aware of that, your application might throw a runtime exception when performing operations on that table.
Remember, Entity Framework creates SQL queries based on its knowledge of the database (which is defined in the EDMX). So if the EDMX is incorrect, the generated SQL queries might lead to problems at runtime.
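To make point 2 concrete, here is a hedged sketch with a hypothetical Customers table (SQL Server syntax; the ALTER succeeds here only because the table is assumed empty):

-- column added on the database side only; the EDMX knows nothing about it
ALTER TABLE Customers ADD Notes nvarchar(100) NOT NULL;

-- Entity Framework keeps generating inserts without Notes, so they now fail:
INSERT INTO Customers (Name) VALUES ('Alice');
-- Cannot insert the value NULL into column 'Notes'

A nullable column (or one with a default) avoids the insert failure, but the new column still stays invisible to EF until the EDMX is refreshed.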

schema update with doctrine2 postgresql always DROPs and then ADDs CONSTRAINTs

When updating the schema, Doctrine always drops and re-adds constraints. I think something is wrong...
php app/console doctrine:schema:update --force
Updating database schema...
Database schema updated successfully! "112" queries were executed
php app/console doctrine:schema:update --dump-sql
ALTER TABLE table.managers DROP CONSTRAINT FK_677E81B7A76ED395;
ALTER TABLE table.managers ADD CONSTRAINT FK_677E81B7A76ED395 FOREIGN KEY (user_id) REFERENCES table."user" (id) NOT DEFERRABLE INITIALLY IMMEDIATE;
...
php app/console doctrine:schema:validate
[Mapping] OK - The mapping files are correct.
[Database] FAIL - The database schema is not in sync with the current mapping file.
How can this be fixed?
After some digging into Doctrine's schema update methods, I finally found the issue. The problem was with the table names "table.order" and "table.user". When Doctrine builds the diff, these names compare as unequal because of internal escaping: "user" != user, so the foreign keys to those tables (order, user) are always recreated.
Solution #1 - just rename the tables to avoid matching PostgreSQL keywords, e.g. my_user, my_order.
Solution #2 - manually escape the table names. This did not work for me; I tried many different escaping approaches.
I've applied solution #1 and now I see:
Nothing to update - your database is already in sync with the current
entity metadata
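For background, a minimal illustration of why "user" is special in PostgreSQL: unquoted, user is a reserved word that evaluates to the session user, so the table can only ever be referenced in quoted form, and that quoting is what throws the diff off:

SELECT user;            -- returns the current session user, not table data
SELECT * FROM "user";   -- the table must always be double-quoted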
I have had the same issue on Postgres with a uniqueConstraint with a where clause.
* @ORM\Table(name="avatar",
*     uniqueConstraints={
*         @ORM\UniqueConstraint(name="const_name", columns={"id_user", "status"}, options={"where": "(status = 'pending')"})
*     })
Doctrine compares the schema from the metadata with the newly generated schema, and during index comparison the where clauses do not match:
string(34) "((status)::text = 'pending'::text)"
string(20) "(status = 'pending')"
You just have to change your where clause to match:
((status)::text = 'pending'::text)
PS: My issue was with a Postgres database.
I hope this will help someone.
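The mismatch happens because Postgres normalizes the predicate of a partial index when it stores it. A reproducible sketch in plain SQL (hypothetical minimal table; the exact casts depend on the column type, here varchar):

CREATE TABLE avatar (id_user int, status varchar(255));
CREATE UNIQUE INDEX const_name ON avatar (id_user, status)
    WHERE (status = 'pending');

-- the stored definition comes back with explicit casts:
SELECT indexdef FROM pg_indexes WHERE indexname = 'const_name';
-- ... WHERE ((status)::text = 'pending'::text)

Copying that normalized form back into the annotation is what makes Doctrine's string comparison succeed.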
I have come across this several times, and it's because the PHP object was changed a few times and still does not match the mapping to the database. Basically a "reboot" will fix the issue, but it can be ugly to implement.
Drop the constraint in the database and in the PHP objects (remove the Doctrine2 mappings), then update the schema and confirm that nothing needs to be updated. Then add the Doctrine mapping back to the PHP, update the schema using --dump-sql, and review the changes that are shown. Confirm that this is exactly what you want and execute the update query. Now updating the schema should not show that anything else needs to be updated.

In DB2, are materialized query tables dropped if one of its source tables is dropped?

For example, I have tables GAME and PRICE, and an MQT called FPS_PRICE that is defined by the following query:
SELECT A.GAMENAME, B.GAMEPRICE
FROM GAME A, PRICE B
WHERE A.GAMEID=B.GAMEID
AND A.GAMETYPE='FPS';
If either the table GAME or PRICE gets dropped... does the MQT FPS_PRICE get dropped as well?
(I would test it out for myself, but I don't have administrator access for the database in question.)
Thanks!
Straight from the Info Center:
All indexes, primary keys, foreign keys, check constraints, materialized query tables, and staging tables referencing the table are dropped. All views and triggers that reference the table are made inoperative. (This includes both the table referenced in the ON clause of the CREATE TRIGGER statement, and all tables referenced within the triggered SQL statements.) All packages depending on any object dropped or marked inoperative will be invalidated. This includes packages dependent on any supertables above the subtable in the hierarchy. Any reference columns for which the dropped table is defined as the scope of the reference become unscoped.
The way to prevent it from being dropped is to create it with a plain CREATE TABLE rather than as a materialized query table.
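For illustration, a hedged sketch of the two forms in DB2, using the tables from the question (the refresh options are assumptions; a typical deferred MQT is shown):

-- MQT form: dropped automatically if GAME or PRICE is dropped
CREATE TABLE FPS_PRICE AS (
    SELECT A.GAMENAME, B.GAMEPRICE
    FROM GAME A, PRICE B
    WHERE A.GAMEID = B.GAMEID AND A.GAMETYPE = 'FPS'
) DATA INITIALLY DEFERRED REFRESH DEFERRED;

-- plain-table form: survives a drop of GAME or PRICE, but it is a one-time copy
CREATE TABLE FPS_PRICE AS (
    SELECT A.GAMENAME, B.GAMEPRICE
    FROM GAME A, PRICE B
    WHERE A.GAMEID = B.GAMEID AND A.GAMETYPE = 'FPS'
) WITH NO DATA;
INSERT INTO FPS_PRICE
    SELECT A.GAMENAME, B.GAMEPRICE
    FROM GAME A, PRICE B
    WHERE A.GAMEID = B.GAMEID AND A.GAMETYPE = 'FPS';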

MySQL Workbench... Populating the fk column

I'm brand new to MySQL Workbench and have a bit of experience with databases (MS Access). I'm having trouble populating my FK with data. Here's what I have in my db schema:
2 tables, Block and Set (Block has a PK Block_ID of type INT; Set has an FK to Block named Set_Block_ID, also of type INT).
A 1-to-many relationship created from Block to Set linking Block_ID to Set_Block_ID. Relationship created, no problems.
I populate the Block table with data. No problems.
I then go to populate the Set table with data. I can see all my columns but not the FK. My question is why?
I have created the exact same db in MS Access, and there my FK is displayed in the linked table and I can populate it while MS Access enforces referential integrity. I'm really brand new to Workbench and can't figure out why I can't see and populate my FK column.
Any help is appreciated!
Thanks!! =)
After having digested all the replies to my question (note sarcasm here), I have finally found a workaround for the issue. To recap:
ISSUE:
created a simple 2 table relationship with Workbench with PK and FK (1 .. n relationship)
FK column not visible in Table Edit so not possible to enter any referencing data
SOLUTION:
installed SQLyog and connected to same server
opened same database and redid the simple 1 .. n relationship
FK column visible for editing in SQLyog
likewise, FK column visible for editing in Workbench
As I said, I'm new to this whole thing so I don't know what the problem was in Workbench. I just know it seems to be working fine now.
As you have noticed, the relationship drawing tool does not create actual foreign key constraints.
However, if you double-click the referencing table and switch to the foreign-key tab, you can create references and specify the columns involved. This generates and maintains the visual linkage automatically.
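Under the hood, what the foreign-key tab produces is an ordinary FOREIGN KEY constraint. A rough sketch of the DDL, assuming the names from the question (note that SET is a reserved word in MySQL, so the table name needs backquotes, and foreign keys are only enforced by InnoDB):

CREATE TABLE Block (
    Block_ID INT PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE `Set` (
    Set_ID INT PRIMARY KEY,      -- hypothetical surrogate key
    Set_Block_ID INT,
    CONSTRAINT fk_Set_Block FOREIGN KEY (Set_Block_ID)
        REFERENCES Block (Block_ID)
) ENGINE=InnoDB;

Once the constraint exists, Set_Block_ID is just a regular INT column in the grid editor: you populate it by typing Block_ID values, and InnoDB rejects any value that does not exist in Block.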