I have to write a solution that uses different databases with different structures from the same code. When a user logs in to the application, I determine at runtime which database he/she is connected to. Users can create tables and columns at any time, and they have to see the change on the fly. The reason I use one and the same code is that the information is manipulated the same way across the different databases. How can I accomplish this at runtime? Is Entity Framework actually a good solution for my problem?
Thanks in advance.
You can do this with EF 4 using a code-first model. That said, I tend to avoid changing DB metadata on the fly, with or without EF. Rather, I'd choose a schema which fits the user's changing needs.
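To illustrate that last point: instead of issuing DDL at runtime, user-defined tables and columns can be modeled as data. Here is a minimal code-first sketch of such an entity-attribute-value layout; all class and property names are illustrative, not from the question:

```csharp
using System.Collections.Generic;
using System.Data.Entity;

// User-defined "tables" and "columns" become rows, not schema changes,
// so every database keeps the same fixed structure.
public class CustomEntity
{
    public int Id { get; set; }
    public string EntityName { get; set; }             // e.g. "Invoice"
    public virtual ICollection<CustomField> Fields { get; set; }
}

public class CustomField
{
    public int Id { get; set; }
    public string FieldName { get; set; }              // e.g. "DueDate"
    public string FieldValue { get; set; }             // stored as text, converted on read
    public int CustomEntityId { get; set; }
    public virtual CustomEntity CustomEntity { get; set; }
}

public class FlexibleContext : DbContext
{
    // The connection string is chosen at login, one per user/database.
    public FlexibleContext(string nameOrConnectionString)
        : base(nameOrConnectionString) { }

    public DbSet<CustomEntity> Entities { get; set; }
    public DbSet<CustomField> Fields { get; set; }
}
```

The trade-off is that queries over this layout are clumsier than over real columns, so it only pays off when the schema genuinely must change at runtime.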
Related
I'm looking forward to creating a multi-tenant SaaS application. I have designed the database so that each tenant uses a different database (Postgres) with standard objects (tables) like contacts and accounts. So far so good, but I can see that many SaaS applications support custom objects (tables), where customers can create their own objects and the columns they need in real time. I want to support the same. Could someone please explain the backend logic behind that? How can we add new tables for custom objects in the database and refresh the DbContext entity at runtime?
Note: I'm aware that for custom fields, many choose JSON-type columns in Postgres, which opens the way to add as many custom columns as needed as JSON in existing tables. But I can't find any recommended way to support custom objects.
EF doesn't really support adding tables at runtime. You can use ADO.NET queries to work with tables whose schemas aren't known at design-time.
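For example, here is a hedged sketch of that ADO.NET route using Npgsql (the Postgres ADO.NET provider), creating a tenant-defined table and reading it back. The class, method, and parameter names are illustrative, and the table name must come from your own validated metadata catalog, never raw user input:

```csharp
using System.Data;
using Npgsql; // Postgres ADO.NET provider

public static class CustomObjects
{
    // Sketch: runtime DDL for a new custom object, then a schema-agnostic read.
    public static DataTable CreateAndLoad(string connectionString, string tableName)
    {
        using (var connection = new NpgsqlConnection(connectionString))
        {
            connection.Open();

            // DDL issued when the tenant defines a new custom object.
            // tableName must be validated against your metadata catalog first,
            // or you open yourself to SQL injection.
            var ddl = $"CREATE TABLE IF NOT EXISTS \"{tableName}\" (id serial PRIMARY KEY)";
            using (var create = new NpgsqlCommand(ddl, connection))
                create.ExecuteNonQuery();

            // Columns are discovered from the result set; no compiled model needed.
            using (var select = new NpgsqlCommand($"SELECT * FROM \"{tableName}\"", connection))
            using (var adapter = new NpgsqlDataAdapter(select))
            {
                var table = new DataTable();
                adapter.Fill(table);
                return table;
            }
        }
    }
}
```

The standard tables (contacts, accounts) can stay in a normal EF model; only the custom objects need this untyped path.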
I have an existing database with many tables, which are accessed using stored procedures only (no O/RM). I'd like to create new tables in this database using Entity Framework and the Code First approach.
Do all the tables in my existing database need to be modeled in my Entity Framework classes? Will I be able to hand-code only the new classes I need in my DbContext? The other tables really need to stay untouched and away from any O/RM for the moment.
Note: I'm going to be using the latest EF5.
As of now, the Power Tools only allow you to reverse engineer all tables and views in the DB, which can be a problem if you have a big DB with hundreds of objects you do not want to reverse engineer.
However, I found an easy workaround for that:
Create a new technical user for the reverse engineering, and grant that user permissions only on the tables and views that you want reverse engineered.
Have fun!
You are under no obligation to map any given table with EF. If you already have a database, you may want to consider reverse-engineering your database with the EF Power Tools available from Microsoft. I did this recently with a MySQL database that I had for testing purposes and it worked quite well!
If you are new to EF, an advantage is that the Power Tools write a ton of code for you, which will help you get a grasp of the Code First syntax. You will need to modify the output, but it's a great start. I really believe this approach will give you the least headache.
The EF PowerTools can be found here: http://visualstudiogallery.msdn.microsoft.com/72a60b14-1581-4b9b-89f2-846072eff19d/
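On the question of mapping only the new tables: a DbContext maps exactly what you declare, so a sketch like this (entity names purely illustrative) leaves every existing, procedure-only table untouched:

```csharp
using System.Data.Entity;

// An illustrative new entity; nothing else in the database is mapped.
public class AuditEntry
{
    public int Id { get; set; }
    public string Message { get; set; }
}

public class NewTablesContext : DbContext
{
    static NewTablesContext()
    {
        // Stop Code First from trying to create or migrate the existing database.
        Database.SetInitializer<NewTablesContext>(null);
    }

    public DbSet<AuditEntry> AuditEntries { get; set; }
}
```

Because EF only validates and queries the entities you map, the stored-procedure-only tables are simply never mentioned.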
I like working with the Entity Framework for many reasons: the ease of use of the entity designer, the power of LINQ, and the ease of binding.
Occasionally I want to build a simple app that doesn't need a database but still needs to work with data and display it on screen, in grids etc., so I'd like to just create a quick EF model and use it for this. But it doesn't seem to work very well with just local data.
My question is: is there a correct usage of EF for working with local data, perhaps serializing/deserializing the whole context to a file? Or is this just too much effort to make work properly? I used to use DataSets this way, along with LINQ to DataSet, and it works well... so perhaps those are still the better way to go for this scenario?
Yes, you can use Entity Framework with local, in-memory data; read the details at the link below:
http://msdn.microsoft.com/en-us/data/jj592872.aspx
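That article covers DbSet.Local. A minimal sketch of the idea (entity and context names are illustrative):

```csharp
using System.Data.Entity;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public static class LocalDataDemo
{
    public static void Run()
    {
        using (var context = new ProductContext())
        {
            context.Products.Load();                 // pull rows into the context
            var localView = context.Products.Local;  // in-memory ObservableCollection

            localView.Add(new Product { Name = "Sample" }); // tracked, not yet saved
            // Bind localView to a grid; edits are tracked until SaveChanges().
        }
    }
}
```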
I don't know what you mean by "local data" exactly (sounds like it's not a database), but I think the Datasets vs. EF portion of your post is (for me) the real question.
EF is great when you need to model robust business logic, are implementing a Domain Model pattern, using Domain Driven Design, etc: basically any scenario where a Table Module or Active Record pattern is inappropriate.
When you just need to display some grids of data, and the business logic is very simple, Datasets are definitely the way to go (in my experience).
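For comparison, the DataSet route mentioned in the question stays entirely in memory and serializes to a file trivially. A small sketch with illustrative column names:

```csharp
using System;
using System.Data;
using System.Linq;

public static class GridDemo
{
    public static void Run()
    {
        // An in-memory DataTable plus LINQ to DataSet; no database involved.
        var table = new DataTable("People");
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Age", typeof(int));
        table.Rows.Add("Ada", 36);
        table.Rows.Add("Linus", 28);

        // LINQ to DataSet query over the in-memory rows.
        var adults = from row in table.AsEnumerable()
                     where row.Field<int>("Age") >= 30
                     select row.Field<string>("Name");

        foreach (var name in adults)
            Console.WriteLine(name);

        // Persisting without a database:
        // table.WriteXml("people.xml");  table.ReadXml("people.xml");
    }
}
```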
I’m considering using Entity Framework for a project. I’m trying to understand how I can configure EF to work with a database environment that is configured with a read server and a write server.
All updates to the write server will be replicated over to the read servers.
My questions are:
Do I need to generate different data models for the two environments?
Can I reuse the same data model?
Is there something built into EF itself that will allow for this?
Thanks
You can reuse the same model because you can instantiate an ObjectContext with any connection string you like. AFAIK you can even switch out the connection later, so you can use two different server connections with the same context.
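A hedged sketch of that idea with DbContext (the connection string names and the entity are assumptions for illustration):

```csharp
using System.Data.Entity;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// One model, two connection strings: reads hit the replica, writes hit the primary.
public class ShopContext : DbContext
{
    public ShopContext(string nameOrConnectionString)
        : base(nameOrConnectionString) { }

    public DbSet<Order> Orders { get; set; }
}

// Usage (connection string names defined in App.config/Web.config):
// var readContext  = new ShopContext("name=ReadServer");
// var writeContext = new ShopContext("name=WriteServer");
```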
If the data models are identical, then no.
Due to #1 being no, #2 is automatically yes.
EF just requires that you pass it a different connection string.
However...
I'm not really sure how you intend to make this work. You will lose virtually every benefit of an ORM: you can't do change tracking, and you will have a lot of problems because you won't be reading from the same place you write to (replication lag means stale reads).
Frankly, I'm not sure how you could make this work with any ORM.
I wonder in which cases it would be good to build an NSManagedObjectModel completely programmatically, with NSEntityDescription instances and all that stuff.
I'm the kind of person who prefers to code programmatically, rejecting Interface Builder. But when it comes to Core Data, I have a hard time figuring out why I should kill my time NOT using the nice Xcode data modeler tool.
And since data models are stuck at a given state (except when you want to do some ugly migration operations, where things probably go wrong and users get mad, really mad), I see no big sense in a data model that's made programmatically for the purpose of changing it all the time.
Did I miss something?
I don't think you missed anything. The only reason I can see to create your model programmatically would be if the objects you are modeling are themselves dynamic: you could, for instance, build a Core Data entity (or graph of entities) in response to a web service which changed over time, or was selected by the user. However, I think if you had that or a similar use case, you wouldn't need to ask this question (and you'd probably solve it a different way anyway).
So, if your application is dealing with resources that are dynamic, as @Andiih mentioned, then this programmatic approach is the only way to do it. I don't know what my Core Data entities are until runtime; I don't know what the attributes are, or what the data looks like. So I ask the server to give me the kinds of resources I should support and what their attributes look like. I build the model, the entities, the properties, the relationships, at runtime. I still want to use Core Data because I'm dealing with a lot of data and I need the benefit of efficient memory management with NSFetchedResultsController, etc. I can only do this programmatically.
The trouble is how to handle migration, to try to preserve as much of the persistent store as possible and reduce the size of the networked data payload after the model changes. Right now I blast the whole model and the persistent store if there's a conflict. I haven't yet figured out a way to create an .xcdatamodel from a programmatically generated model, so I can't yet create a version mapping to do the migration.
Everything is a trade-off. Basically, I think IB and the visual Core Data modeler are the right tools if you're building a simple application. You'll need to make the determination when your application becomes large/complex enough that you prefer to have direct control over all aspects of the code.
Regarding Interface Builder, if you have an application with a variety of complex interactions between view controllers, and multiple custom controls, I find code more appropriate.
For Core Data, the question is pretty much the same. Does your project have a defined scope? Can you foresee everything in that scope being done within the visual modeler? If so, it's probably fine. For other projects, where you may be asked to add features on an ongoing basis, perhaps it's better to spend a little more time writing it out so you have more flexibility later.
One other thing to consider, which doesn't get mentioned much, is that it's MUCH harder to ask for help with IB or any hybrid visual design/code system. When something does go wrong, or you need help, it's far easier to post your code than to try to explain what's going on in a visual modeler.
In general, there's no reason to build the managed object model in code. There's nothing you can do in code that can't be done in the model editor. There are some fancy tricks you can do in code, however, to work with multiple models. For example, you can merge two models, establishing cross-model relationships between entities in those models at load time (see Cross-model relationships in NSManagedObjectModel from merged models?).
Regarding whether it's a good idea to code or use the graphical editor, I think the balance tips heavily towards the graphical editor in this case. Being able to verify the model by visual inspection instead of (rather convoluted) code is a win. The model can still be verified by unit test, if you desire.
I have one use case that might be valid: what if you load some data from the internet, whether it's XML from an RSS feed or a WSDL response, flatten those responses into tabular form by generating an in-memory data table, and finally mash it all up into a single coherent data model? Then you can create entities for those in-memory data tables and set up master/detail relationships. That's one case where I think a programmatically generated Core Data model could become a handy and powerful feature.
I've changed models programmatically in unit tests. For example, I wrote a class that is designed to work with Core Data models that have a particular protocol attached. Instead of testing against a random implementation, I mutated the default model by adding one programmatically, just in the unit tests, and tested against that test-only model.