Alter database to match model - mysql-workbench

Originally, I used Data Modelling in MySQL Workbench to design a database consisting of a series of tables (i.e. the columns and relationships).
Then using Database -> Forward Engineer, I created a database, and inserted data into the tables.
Now I've realised that the model I've designed needs some changes, so I've altered some tables by inserting columns. My question is: how do I get MySQL Workbench to alter the existing tables to match?
Using Database -> Synchronize Model, Update Source just generates a bunch of CREATE TABLE IF NOT EXISTS SQL statements, and since the tables already exist, nothing changes.

What you are looking for is in the model menu: Database -> Synchronize Model.

I couldn't get File -> Export -> Forward Engineer SQL ALTER Script to work, so I made a backup of the data, dropped the tables, recreated them, and then imported the data. I'd rather find a way to get MySQL Workbench to generate ALTER commands from the changes in my model.

The 2011 answer is no longer up to date. I struggled to find the option in a recent version. Here is the new procedure (works for MySQL Workbench 6.2 at least):
When you have finished editing your model, open Database -> Synchronize with Any Source.
In the step Select Source you have 3 parts:
Source: choose Model Schemadata
Destination: choose Live Database Server
Send updates to: choose whether the live database should be updated or if you only want to save the changes to a .sql file
Proceed in the wizard; you can then review the tables and SQL queries that will be executed. You can also ignore the update of some tables.
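For illustration, the synchronization script that the wizard saves or applies is ordinary DDL. A minimal sketch of what it might contain for a hypothetical orders table that gained a column and an index (the schema, table, and column names below are made up, not taken from any model above):
-- Hypothetical output of Database -> Synchronize with Any Source after the model
-- gained a column and an index (example names only).
ALTER TABLE `mydb`.`orders`
  ADD COLUMN `shipped_at` DATETIME NULL AFTER `created_at`,
  ADD INDEX `idx_orders_shipped_at` (`shipped_at`);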

Related

How to update tabular data from source tables

I have a simple test setup:
A SQL Server (2017) with one database, with one table
A SQL Server Analysis Services instance (2017, compatibility level 1400)
I have created a simple tabular model in Visual Studio with one data source (the database with one table) and one table.
This is my power query:
let
    Source = #"SQL/MYCOMPUTER\SQLDEV;SampleDatabase",
    dbo_testTable = Source{[Schema="dbo",Item="testTable"]}[Data]
in
    dbo_testTable
I have deployed this tabular model to my SSAS instance...
Now my question: if the table in my SQL Server is updated (added records), how can I see these updates reflected in the Tabular Model? Do I have to rerun the Tabular Model somehow?
I have tried "Process Table" in SSMS on the Tabular model table, but it does not get the new records...
Processing a table processes whichever dimension or fact table you selected, and it only reads data from the database objects used by that table. What processing is actually performed depends on the processing type you used. As for the question in the answer you posted: Process Full on an entire Tabular model removes all data from the deployed model, then reloads everything and processes the hierarchies and measures as well, so yes, after processing with this option the new data from the underlying tables will be in the model for all tables within it. There are multiple processing types that can be run at the database, table, or partition level. You can view additional details on these via the Microsoft reference.
I have found that at the database level in the SSAS instance there is a "Process Database" option with a "Process Full" setting, which does update all the underlying tables.
But maybe there is a better way to do this?

How to add a new table to the database using MySQL Workbench

I was creating a MySQL database to store medicine data. I created one table and needed to add one more. After creating it in the EER model, I tried to query the database from MySQL Workbench, but it does not show the new table even though it is present in the EER model. How can I solve this problem?
Modeling is just the task of abstractly designing your schema and its objects (e.g. tables, views etc.). It does not actually create these objects. For this you have to forward engineer your model to a server (see Database menu). Once done you can use the Synchronization feature to update either model or server (or both) with any changes made.
But keep in mind this is only for the objects, not for any data.
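To make the distinction concrete: forward engineering simply runs the DDL for the modeled objects against the server. A minimal sketch of what that amounts to for a hypothetical medicine table (illustrative names only, not taken from the original model):
-- Roughly what Database -> Forward Engineer executes for a modeled table
-- (hypothetical schema, table, and column names).
CREATE TABLE IF NOT EXISTS `medicine_db`.`medicine` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(100) NOT NULL,
  PRIMARY KEY (`id`)
);
After that, Synchronize Model only adjusts the table definitions; existing rows are not copied or touched.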

Data insert issue after migrating database from SQL Azure to SQL Server

I have a database on SQL Azure which has an identity primary key column.
After using SQL Server Import and Export Wizard, I transferred the data to my SQL Server 2008 R2 database.
My ASP.NET application runs fine and reads the data. But when I try to insert a row into the 'User' table, it gives me an error:
Cannot insert null in column 'UserId'.
The reason is that it is not able to generate the identity value.
How can I overcome this issue?
PS: I tried generating the scripts from SQL Azure, but the SQL file is 500 MB in size and my host does not allow running a script that big.
Edit: I am using Entity Framework for data access. The UserId column has an IDENTITY(1,1) property.
Edit: I tried creating the schema with the SQL Azure Migration tool and then used the Import and Export Wizard to copy the data,
but the wizard does not maintain the relations amongst the rows.
The data import/export wizard doesn't preserve the whole structure of your database objects; that is, it only copies the data, not the full definition of the table the data fits into, including identity and key definitions.
You could import the data, and then manually set all the primary keys and default fields to match your desired database definition, or you could connect to your Azure instance and use the generate script option to generate your schema in the 2008 database prior to copying.
But the real answer is that you should be using the Copy Database Wizard to accomplish this, which works fine with Azure. It was designed for this scenario.
The issue was that the wizard was trying to insert explicit primary key values into identity columns, which is disabled by default. And without inserting the primary keys, the relationships can't be maintained, hence the whole issue.
To resolve this issue and do a foolproof migration, ensure that the new schema maintains all the identity columns.
When selecting the source and destination tables, click on "Edit Mappings" for the specific tables and check the "Enable identity insert" check box to allow insertion of primary key values, which keeps the structure and relations intact.
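For reference, the "Enable identity insert" checkbox corresponds to T-SQL's SET IDENTITY_INSERT option. A minimal sketch of doing the same thing by hand, assuming a hypothetical dbo.[User] table whose UserId is the identity column:
-- Allow explicit values to be written into the identity column while copying rows
-- (hypothetical table and column names; enable it for one table at a time).
SET IDENTITY_INSERT dbo.[User] ON;

INSERT INTO dbo.[User] (UserId, UserName)
VALUES (1, 'alice'),
       (2, 'bob');

SET IDENTITY_INSERT dbo.[User] OFF;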

How to update the database schema from EF 5.0 model first changes?

I'm using VS2012 and EF 5.0 with a model first approach. I am wondering if there is any good way to generate incremental DDL to update model changes without dropping all the tables and losing the data I have in there already.
I like to use a SQL Server Database project within Visual Studio to keep my schema in sync with the database - it's like a mini SQL Server schema store.
Basically what we are doing here is updating the schema of the data project using the model's DDL script, then comparing and pushing those changes out to the database. Just be sure to generate your model's DDL script first.
Create a new SQL Server Database project
Right click data project and import your existing schema from the database server
Right click data project and import your generated DDL script from model first project.
Right click data project and do a schema compare of your project vs. your database server
Update database based on this schema compare (click update)
Every time you want to update your database, just generate and import your model's SQL script, compare, and update. It takes a couple of steps but works perfectly.
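The update that the schema compare applies is an incremental change script rather than a drop-and-recreate. A sketch of the kind of statement it might emit when the model adds a nullable column (hypothetical table and column names):
-- Example of an incremental change a schema compare update might apply
-- (hypothetical names; the actual script is generated by the compare).
ALTER TABLE [dbo].[Customer]
    ADD [MiddleName] NVARCHAR(50) NULL;
Because the script only contains the delta, existing rows and the rest of the schema are left alone.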

How to copy everything except data from one database to another?

In T-SQL (Microsoft SQL 2008), how can I make a new database which will have the same schemas, tables, table columns, indexes, constraints, and foreign keys, but will not contain any data from the original database?
Note: making a full copy, then removing all data is not a solution in my case, since the database is quite big, and such full copy will spend too much time.
See here for instructions: How To Script Out The Whole Database In SQL Server 2005 and SQL Server 2008
In SQL Management Studio, right click on the database and select "Script database as"
http://msdn.microsoft.com/en-us/library/ms178078.aspx
You can then use the script to create an empty one.
Edit: the OP did say 2008.
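Roughly, the workflow is: create an empty target database, then run the generated schema-only script against it. A minimal sketch in T-SQL (the database name is a placeholder, and the generated statements depend on your source database):
-- 1. Create the empty target database (placeholder name).
CREATE DATABASE MyDatabase_Empty;
GO

-- 2. Run the script produced by scripting the source database (schema only) in its context.
USE MyDatabase_Empty;
GO
-- ... generated CREATE TABLE / CREATE INDEX / ALTER TABLE ... ADD CONSTRAINT statements ...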
I use Liquibase for this purpose. Just point Liquibase at a different server and it will use your changelog to bring the second database up to date, schema-wise. It has the added benefit that the changelog file gets stored in source control, so I can have tagged versions of it, allowing me to restore a database to what a specific version of my app expects.