Managing Large Database Entity Models - entity-framework

I would like to hear how others are (or are not) working effectively with the Visual Studio Entity Designer when many database tables exist. It seems to me that navigating the Designer to find what you are looking for is tough enough with just a few tables, so what about a database with, say, 100 to 200 tables? When a table change is made at the database level, how is the model updated? Does it overwrite any manual changes you have made to the model? How would you quickly find an entity in the designer to make or inspect a change? It seems unrealistic to be scrolling around looking for a specific entity.
Thanks for your feedback!

I would like to say we have a fairly large number of tables (52), and I don't have major issues navigating the model; I would be more concerned with EF internally not being able to scale.
I don't use the designer, I use the Model Browser. There is a feature in the Model Browser called "Show in Designer" which can take you to any table you are searching for.
In all honesty I use the Model Browser more than I do the Designer. From the browser you can set pretty much anything (table mappings, table properties).
When it comes time to update, it does not affect your conceptual model (any manual changes), unless of course you have removed something from your physical model and it can no longer map.

Related

Will the changes affect Moodle migration?

I am using Moodle version 3.2. I made some changes to the Moodle database tables. For example, I added a Schoolyear column to the mdl_course table for my own requirements. When Moodle migrates to the next version, will these changes be affected or not?
It is generally a bad idea to mess around with core Moodle database tables. It can cause problems during upgrades (and will not be included in backups, unless you change the core code as well), so it is usually better to store extra data in new tables.
That being said, there are occasions where it is really not practical to do anything else, and it usually does not cause too many actual problems. The harder part is merging the core code changes that work with the changed database tables.

How can Telerik OpenAccess ORM be used in partnership with TFS?

In a number of team projects I've worked on over the past year, we have chosen the Telerik OpenAccess ORM as the tool to manage our database model. We also use TFS as our version control software.
I've run into a number of difficulties using the Telerik product (which I'll save for another day), but one of the biggest issues is when multiple team members attempt to work on the model simultaneously and try to commit their changes to TFS. The models generated by Telerik are difficult to merge, and any conflicts will, more often than not, lead to time lost fixing the entity model. The only practical way to avoid these difficulties seems to be to implement a "relay" system, where only one person at a time can work with the model; something that isn't practical in a team development environment.
Has anyone found a way to use the two tools harmoniously?
This will always be an issue when working with designer-generated models like this, including the model used by Entity Framework.
You could always switch to Code Only mappings, though. Then all of the mapping for your project will be simple, mergeable code files.
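For illustration, the same idea expressed in Entity Framework Code First terms looks roughly like the sketch below (OpenAccess's fluent mapping follows the same pattern); the Person entity and table name are invented for the example:

```csharp
using System.Data.Entity; // Entity Framework 6 "Code First"

// A plain entity class - no designer-generated model file to merge.
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class PeopleContext : DbContext
{
    public DbSet<Person> People { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // All mapping lives in ordinary code, so a source-control conflict
        // is just a normal code merge instead of a designer-file merge.
        var person = modelBuilder.Entity<Person>();
        person.HasKey(p => p.Id);
        person.ToTable("People");
        person.Property(p => p.Name).HasMaxLength(100);
    }
}
```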

CQRS with Legacy Systems

I'm looking to convert a relatively new web-based application with a clear domain model over to more of a CQRS style system. My new application is essentially an enhanced replacement of an older existing system.
The existing systems in my organization share a set of common databases, which are updated by an untold number of applications (developed via the Chaos Method) that exist in silos throughout the company. (As it stands, I believe that no single person in the company can identify them all.)
My question is therefore about the read model(s) for my application. Since various status changes, general user data, etc. are updated by other applications outside my control, what's the best way to handle building the read models in such a way that I can deal with outside updates, but still keep things relatively simple?
I've considered the following so far:
Create views in the database for the read models that read all tables, legacy and new
Add triggers to existing tables to update new read model tables
Add some code to the database (a CLR stored proc, etc. [SQL Server]) to update an outside datastore for the read models
Abandon hope
What is the general consensus on how to approach this? Is it folly to think I can bring order to a legacy system without fully rewriting everything from scratch?
I've used option #1 with success. Creating views to denormalize the data into a read model is a viable option, depending on the complexity of the write database(s). Meaning, if it is relatively straightforward joins that most developers can understand, then I would take a closer look to see if it's viable for you. I would be careful about having too much complexity in these views.
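As a sketch of how option #1 can look from the application side: assuming such a denormalized view already exists (a hypothetical vw_CustomerSummary joining the legacy and new tables), you can map a read-only class straight onto it, e.g. with Entity Framework:

```csharp
using System.Data.Entity;

// Hypothetical read model shaped by a database view (vw_CustomerSummary) that
// joins the legacy and new tables into one denormalized row per customer.
public class CustomerSummary
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
    public string CurrentStatus { get; set; }     // maintained by an outside legacy app
    public decimal OpenOrderTotal { get; set; }
}

public class ReadModelContext : DbContext
{
    public DbSet<CustomerSummary> CustomerSummaries { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // EF will happily map an entity onto a view for querying;
        // just never try to save changes through it.
        var summary = modelBuilder.Entity<CustomerSummary>();
        summary.HasKey(c => c.CustomerId);
        summary.ToTable("vw_CustomerSummary");
    }
}

// Typical usage in a query handler:
// var active = context.CustomerSummaries
//     .Where(c => c.CurrentStatus == "Active")
//     .ToList();
```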
Another thing to consider is periodic polling to build and update the read model, similar to a traditional reporting database. Although not optimal compared to a notification, depending on how stale your read model can be, this might also be an option to look at.
I was once in a similar situation; the following steps are how I did it:
To improve the legacy system and achieve a cleaner code base, the key is to take over the write responsibility. But don't be too ambitious, as this may introduce interface/contract changes that make the final deployment risky.
1. If all the writes are fired through anything other than direct SQL updates, keep them as backward compatible as you can. Treat them as adapters/clients of your newly developed command handlers.
2. If some of the writes are direct SQL updates that are out of your control, ask the team in charge whether they can change to your new interface/contract. If not, see step 3.
3. Ask whether they can tolerate eventual consistency and are willing to replace the SQL updates with database procedures. If yes, put all the SQL updates in the procedures, schedule a deployment, and see step 4. If no, you should probably include those updates in your own refactoring.
4. Modify the procedures so that instead of running the SQL updates they insert events, and develop a backend job to roll through those events and publish them. Have your new application subscribe to these events and fire commands to your command handlers.
5. Emit events from your command handlers and use them to update the tables that the other applications consume (a rough sketch of steps 4-5 follows this list).
6. Move on to the next part of the legacy system.
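A very rough sketch of the event flow in steps 4-5, with every type and name invented for illustration:

```csharp
using System;
using System.Collections.Generic;

public record LegacyOrderUpdated(Guid OrderId, string NewStatus);   // rolled out of the legacy event table
public record ChangeOrderStatus(Guid OrderId, string NewStatus);    // command for the new model
public record OrderStatusChanged(Guid OrderId, string NewStatus);   // emitted by the new command handler

// Step 4: the backend job that rolls the legacy events hands them to this
// subscriber, which translates them into commands for your own handlers.
public class LegacyEventSubscriber
{
    private readonly ChangeOrderStatusHandler _handler;
    public LegacyEventSubscriber(ChangeOrderStatusHandler handler) => _handler = handler;

    public void On(LegacyOrderUpdated e) =>
        _handler.Handle(new ChangeOrderStatus(e.OrderId, e.NewStatus));
}

// Step 5: the command handler updates the new write model and publishes an
// event; a projector subscribed to it keeps the tables the other applications
// still read up to date.
public class ChangeOrderStatusHandler
{
    private readonly List<Action<OrderStatusChanged>> _projectors = new();
    public void Subscribe(Action<OrderStatusChanged> projector) => _projectors.Add(projector);

    public void Handle(ChangeOrderStatus cmd)
    {
        // ... apply the change to your own domain model / write store here ...
        var evt = new OrderStatusChanged(cmd.OrderId, cmd.NewStatus);
        foreach (var p in _projectors)
            p(evt);   // e.g. run an UPDATE against the legacy table other apps consume
    }
}
```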
If we had an infinitely powerful server, we wouldn't bother with view models and would instead just read from the basic entity tables. View models are meant to improve performance by preparing and maintaining an appropriate dataset for display. If you use a database view as a view model, you've really not gained much performance over an ad hoc query (if you ignore the pre-planning that the SQL parser can do for a view).
I've had success with a solution that's less intrusive than @Hippoom's solution, but more responsive than @Derek's. If you have access to the legacy system and can make minor code changes, you can add an asynchronous queue write that generates an event in a queueing system (RabbitMQ, Kafka, or whatever) in the legacy system repositories, or wherever data is persisted. Making these writes asynchronous should not introduce any significant performance cost, and should the queue write fail it will not affect the legacy system. This change is also fairly easy to get through QA.
Then write an event-driven piece that updates your read models. During the legacy system update phase (which can take a while), or if you only have access to some of the legacy systems that write to these databases, you can have a small utility that puts a new "UpdateViewModel" event in the queue every couple of minutes. That way you get timely events when the legacy systems save something significant, but you are also covered for the systems that you are not able to update.
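As a sketch of what that asynchronous queue write could look like, using a made-up IEventQueue abstraction (behind which could sit the RabbitMQ or Kafka client of your choice) and placeholder legacy types:

```csharp
using System.Threading.Tasks;

// Hypothetical abstraction over whatever queue you pick (RabbitMQ, Kafka, ...).
public interface IEventQueue
{
    Task PublishAsync(string topic, string payload);
}

// Wraps an existing legacy repository; the only change to legacy behaviour is
// a fire-and-forget publish after a successful save.
public class AuditedCustomerRepository
{
    private readonly LegacyCustomerRepository _inner;   // existing legacy class, unchanged
    private readonly IEventQueue _queue;

    public AuditedCustomerRepository(LegacyCustomerRepository inner, IEventQueue queue)
    {
        _inner = inner;
        _queue = queue;
    }

    public void Save(Customer customer)
    {
        _inner.Save(customer);   // the legacy write happens exactly as before

        // Fire-and-forget: a queue outage must never break the legacy save path.
        _ = Task.Run(async () =>
        {
            try
            {
                await _queue.PublishAsync("customer-updated", customer.Id.ToString());
            }
            catch
            {
                // Optionally log; deliberately swallowed so the legacy system is unaffected.
            }
        });
    }
}

// Placeholders standing in for the real legacy types.
public class Customer { public int Id { get; set; } }
public class LegacyCustomerRepository { public void Save(Customer c) { /* existing code */ } }
```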

How to avoid having views "marked inoperative"?

How can I modify tables/views that impact other views without having those dependent views "marked inoperative"?
We're running DB2 9.5 LUW. I've read Leons Petrazickis' blog post Find a list of views marked inoperative where he says,
There are also ways to avoid it using transactions, CREATE OR REPLACE statements, and other measures.
Since we can't take advantage of the new features in 9.7 I need someone to elaborate on these other ways that Leons mentions. An example that runs in IBM Data Studio would be great.
The "CREATE OR REPLACE" functionality was added in DB2 9.7. Prior to this, the only way to avoid marking views inoperative is to drop the views before making the changes to the objects underneath the views, and recreate the views after.
Or, avoid making changes to the objects the views depend on. :-)

Version control of databases

I am curious if there are any solutions out there, preferably free, that can have a central database to publish data to in a versioned manner.
For example,
Client 1 decides to edit a person's profile, so it gets a local copy on its machine to make changes to. When they are happy with their edit, they publish the results to the central database, just like you would do a submit in Perforce.
Client 2 edits their own local copy of the same data, but when they go to submit they have to resolve conflicts.
The central database must store compressed differences between versions of the data.
At any point someone can look at all versions of the data submitted.
Check out OffScale DataGrove.
This product tracks changes to the entire DB - schema and data. You can tag versions at any point in time and return to older states of the DB with a simple command. It also allows you to create virtual, separate copies of the same database so each team member can have his own separate DB. All the virtual copies are tracked in the same repository, so it's super-easy to revert your DB to someone else's version (you simply check out their version, just like you do with your source control). This means all your DBs can always be synchronized.
Disclaimer - I work at OffScale :-)
"Version control of databases" is a bit ambiguous for a title, because you are actually asking for a VCS using a database as repository "data store".
Subversion has such a model (either Berkeley DB or filesystem-based).
It also has a Copy-Modify-Merge model which is similar to the kind of locking mechanism you are describing.
The SQL tools from Redgate sort of offer some of this functionality, but not implemented in the way you describe. For example, SQL Data Compare can compare the differences between the data in two databases, and SQL Source Control can be used as well.
However, getting a copy of the database on a local machine, making changes and resubmitting would be more of a manual process.
What database server are you using? If you are using MySQL and PHP, Doctrine has 'Versionable' behavior which can be applied to a model.
The documentation on this behavior is here:
http://www.doctrine-project.org/projects/orm/1.2/docs/manual/behaviors/en#core-behaviors:versionable
This is exactly what my product (yes I'm biased :)) DBmaestro Teamwork does.
It enforces and keeps track of changes to both structure and content.
It prevents two parallel changes to an object's structure or content by two people (as long as they work on the same object - meaning the same database, same schema, ...).
It uses a baseline-aware analysis which understands the nature of each change and knows whether the change should be promoted, should be ignored (because it was made from another environment), or whether there is a conflict.
And much more…
I would encourage you to read a comprehensive, unbiased review of Database Enforced Management Solutions by veteran database expert Ben Taylor, which he posted on LinkedIn: https://www.linkedin.com/pulse/article/20140907002729-287832-solve-database-change-mangement-with-dbmaestro