MySQL Workbench: Deleting multiple tables from Physical Schemas

I'm looking for a smart way to delete multiple tables from the Physical Schemas view.
I know right-clicking a table and hitting Delete ... does the job, but it doesn't work well for thousands of tables. Selecting multiple tables, right-clicking, and hitting Delete ... doesn't work as I expected: it deletes only one table.
How can I do that? I'm using MySQL Workbench 6.2.5.

That looks like an oversight or a design error: you can only delete one table at a time. To get better functionality implemented, file a feature request at http://bugs.mysql.com.

Related

How to actually delete files in Iceberg

I know that in Apache Iceberg I can set limits on the number and age of snapshots, and that "deleting" data from the table does not result in underlying data removal; it simply masks or deletes tracking information.
I would like to actually delete the underlying files on delete, however. I know this will make time-travel inconsistent, but it is still a business requirement.
https://iceberg.apache.org/docs/latest/configuration/
As best as I can tell, I'll have to track and manage the physical life-cycle of every file independently. Am I missing something?
If you don't care about table history (or time travel), you can simply call the expire_snapshots procedure after each delete.
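For example, with the Spark SQL procedure (catalog and table names here are placeholders, and the timestamp is only an illustration):

-- Expire everything older than the given timestamp, keeping at least one
-- snapshot; data files no longer referenced by any remaining snapshot are
-- physically deleted.
CALL my_catalog.system.expire_snapshots(
  table => 'db.my_table',
  older_than => TIMESTAMP '2023-01-01 00:00:00.000',
  retain_last => 1
);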
This is a common question for many Iceberg users.
We often need an asynchronous task to expire snapshots and delete data.
If you use Spark, you can use https://iceberg.apache.org/docs/latest/spark-procedures/#expire_snapshots, as shay said.
You can also do this using the Java API provided by Iceberg: https://iceberg.apache.org/docs/latest/api/.
Starting a task for each table is difficult to manage, and tables often have different TTLs. In this case, you can add custom configurations to each table, scan all Iceberg tables, and then decide whether to delete expired snapshots and data based on those configurations.
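For instance, per-table snapshot TTLs can be stored as Iceberg table properties that such a maintenance job then reads and acts on (the table name is a placeholder; the property shown is Iceberg's history.expire.max-snapshot-age-ms):

ALTER TABLE my_catalog.db.my_table SET TBLPROPERTIES (
  -- keep snapshots for at most one day (value in milliseconds)
  'history.expire.max-snapshot-age-ms' = '86400000'
);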
If you are using Iceberg with Hive (version 4.0.0-alpha2 or later), you can try the expire_snapshots command in Beeline.
For example:
ALTER TABLE test_table EXECUTE expire_snapshots('2021-12-09 05:39:18.689000000');
For more details, see:
https://docs.cloudera.com/cdw-runtime/cloud/iceberg-how-to/topics/iceberg-expiring-snapshots.html
The Hive Jira ticket that added support:
https://issues.apache.org/jira/browse/HIVE-26354

Two names, or a permanent alias, for the same Postgres table and column -- during migration

How can I create a permanent alias for a table (and also a column) such that queries against either name work?
I'd like to do this to enable renaming tables in our software. Our migrations need to run against live clusters, so there's a period where the software version will be older than the DB version; it has to work with the old names of the tables and columns.
I see that it's possible to create a view with rules for insert, update, and delete, which I think is fairly close, but I'm wondering if there is a simpler approach. This approach also doesn't work if I wish to simply rename a column in a table (that is, without having to rename the table at the same time).
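For reference, a minimal sketch of that view approach with hypothetical names (table users renamed to app_users, column name renamed to full_name). Since PostgreSQL 9.3, a simple view like this is automatically updatable, so explicit rules are often unnecessary:

ALTER TABLE users RENAME TO app_users;
ALTER TABLE app_users RENAME COLUMN name TO full_name;

-- Old software keeps selecting, inserting, updating, and deleting
-- through the old names:
CREATE VIEW users AS
  SELECT id, full_name AS name
  FROM app_users;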

How to replicate rows into tables of different databases in PostgreSQL?

I use PostgreSQL. I have many databases on a server. There is one database which I use the most, say 'main'. This 'main' database has many tables inside it, and the other databases have many tables inside them as well.
What I want to do is: whenever a new row is inserted into the 'main.users' table, I wish to insert the same data into the 'users' table of the other databases. How shall I do it in PostgreSQL? Similarly, I wish to do the same for all actions like UPDATE, DELETE, etc.
I have gone through the "logical replication" concept as suggested. In my case I know the source DB name up front, and I will come to know the target DB name as part of the query, so it is going to be dynamic.
How can I achieve this? Is there any database concept available in PostgreSQL for it? I welcome all other possible approaches as well. Please share some ideas on this.
If this is all on the same Postgres instance (aka "cluster"), then I would recommend using a foreign table in the other databases to access the tables from the "main" database.
Those foreign tables look like "local" tables inside each database, but access the original data in the source database directly, so there is no need to synchronize anything.
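A sketch of that setup using postgres_fdw (host, credentials, and the column list are placeholders), run inside each of the other databases:

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER main_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'localhost', dbname 'main');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER main_srv
  OPTIONS (user 'app_user', password 'secret');

-- Looks like a local table, but reads and writes main's users directly:
CREATE FOREIGN TABLE users (
  id   integer,
  name text
) SERVER main_srv OPTIONS (schema_name 'public', table_name 'users');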
Upgrade to a recent PostgreSQL release and use logical replication.
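A minimal sketch of that approach (PostgreSQL 10 or later; names and connection details are placeholders):

-- On the 'main' (source) database:
CREATE PUBLICATION users_pub FOR TABLE users;

-- On each target database (a users table with the same schema must
-- already exist there). Note: per the docs, if publisher and subscriber
-- are in the same cluster, create the replication slot manually and pass
-- create_slot = false, or CREATE SUBSCRIPTION will hang.
CREATE SUBSCRIPTION users_sub
  CONNECTION 'host=localhost dbname=main user=repl_user password=secret'
  PUBLICATION users_pub;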
Add a trigger on the table in the master database that uses dblink to access and write to the other databases.
Be sure to consider what should be done if the row already exists remotely, or if the remote server is unreachable.
Also note that updates propagated using dblink are not rolled back if the invoking transaction is rolled back.
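A hedged sketch of such an insert trigger using dblink (connection string, table, and column names are placeholders; error handling is omitted; EXECUTE FUNCTION needs PostgreSQL 11+, older versions use EXECUTE PROCEDURE):

CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION push_user_insert() RETURNS trigger AS $$
BEGIN
  -- Push the new row to the other database; %L quotes the values safely.
  PERFORM dblink_exec(
    'host=localhost dbname=other_db user=app password=secret',
    format('INSERT INTO users (id, name) VALUES (%L, %L)',
           NEW.id, NEW.name)
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_push_insert
  AFTER INSERT ON users
  FOR EACH ROW EXECUTE FUNCTION push_user_insert();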

Stop BDR from replicating DROP TABLE or CREATE TABLE

I have two databases with tables that I want to sync. I don't want to sync any other table. I'm using Postgres-BDR to do that.
Those tables are part of replication set common. There are some circumstances where other tables share a name across nodes (but are NOT in common), and a node will call DROP TABLE and then CREATE TABLE. Even though those tables aren't part of the common replication set, these commands are still replicated to the other nodes, causing the other node to lose all of its data in its table and then create an empty table.
How can I stop this? I only want commands that affect common to be replicated to the other nodes.
Never mind, I found it. It's available via bdr.skip_ddl_replication.
I just put bdr.skip_ddl_replication = on in postgresql.conf, restarted the server, and BOOM! Works like a charm.
EDIT
It would be prudent of me to point out that the documentation warns that this option could break database replication if used improperly. But since I'll be VERY tightly controlling the table schema, it shouldn't cause any problems.

Is database deletion through Entity Framework permanent?

I have a web application using Entity Framework and an Azure SQL database. I would like to know whether deleting a row in the database removes the information permanently, or simply marks it as deleted so that it can still be accessed if needed?
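// Remove marks the entity as Deleted; SaveChanges then issues a SQL DELETE.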
db.MyTable.Remove(objectInstance);
db.SaveChanges();
Is this something that can be configured, or do I need to implement this feature myself by adding a deleted attribute?
The reason I want this is to be able to perform analytics that include objects which might already have been deleted.
EF has nothing to do with this. Whether records are deleted permanently or not is up to the RDBMS; EF is just an ORM for the RDBMS.
Options, IMO:
You manage the records marked as deleted using an extra column (a "soft delete"); see the sketch below.
You can move the deleted records to another table or file, whichever is convenient for you to run analytics on. That way your queries will touch fewer records and be faster.
You can go through the log files and execute the INSERTs again to recover the deleted records.
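A minimal sketch of the first option in SQL (hypothetical table and column names); the application then issues an UPDATE instead of a DELETE:

-- One-time schema change:
ALTER TABLE MyTable ADD IsDeleted bit NOT NULL DEFAULT 0;

-- Instead of deleting the row:
UPDATE MyTable SET IsDeleted = 1 WHERE Id = 42;

-- Normal queries filter the flag; analytics queries simply omit the filter:
SELECT * FROM MyTable WHERE IsDeleted = 0;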
Hope these suggestions point you in the right direction.