In one of my projects, EF 5.0 generates POCO entities for me under abcModel.tt. This is fine, as I can use them in my project, but if I add validation to them, it gets lost when I update the .edmx from the DB. As a workaround, I also build all the entities by hand:
Student.cs — generated by EF POCO
MyStudent.cs — written by me
Another reason to build MyStudent.cs is that the DB column names are not well named, for example:
Generated by EF — Student.cs:
{
    public int sid;       // poor name, taken from the table column
    public string sfname; // poor name, taken from the table column
    public string slname; // poor name, taken from the table column
}
Written by me — MyStudent.cs:
{
    public int StudentId;
    public string FirstName;
    public string LastName;
}
So I would like to know: is my approach of keeping dual entities OK?
Note:
I cannot change the table column names because the DB is large and was already built by the DBAs.
Only solutions/suggestions that work with the Database First approach, please.
Thanks
Sure, you can do this. I recommend creating a data transfer layer and mapping your objects (e.g. with AutoMapper). Then, when you need to retrieve or save data to/from the database, call your data transfer class and return a business object (your other class).
Something like this:
public class StudentDT
{
    public StudentBO GetStudent(int id)
    {
        using (var db = new dbContext())
        {
            var studentDB = db.Students.First(s => s.sid == id);
            StudentBO sbo = Mapper.Map<StudentBO>(studentDB);
            return sbo;
        }
    }

    public void SaveStudent(StudentBO sbo)
    {
        using (var db = new dbContext())
        {
            var sdb = Mapper.Map<Student>(sbo);
            db.Students.Add(sdb);
            db.SaveChanges();
        }
    }
}
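Since the column names differ from the business object property names, AutoMapper needs explicit member mappings. A minimal sketch (the exact API depends on your AutoMapper version — older versions use static Mapper.CreateMap calls like these, run once at application startup; StudentBO stands in for your hand-written class):

```csharp
// Map the EF-generated entity to the business object,
// translating the terse column-based names to readable ones.
Mapper.CreateMap<Student, StudentBO>()
      .ForMember(d => d.StudentId, o => o.MapFrom(s => s.sid))
      .ForMember(d => d.FirstName, o => o.MapFrom(s => s.sfname))
      .ForMember(d => d.LastName,  o => o.MapFrom(s => s.slname));

// Reverse map for saving back to the database.
Mapper.CreateMap<StudentBO, Student>()
      .ForMember(d => d.sid,    o => o.MapFrom(s => s.StudentId))
      .ForMember(d => d.sfname, o => o.MapFrom(s => s.FirstName))
      .ForMember(d => d.slname, o => o.MapFrom(s => s.LastName));
```

With these maps registered, the Mapper.Map calls in GetStudent and SaveStudent will translate the renamed properties in both directions, and regenerating the .edmx never touches your business objects.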
For a little context: I used NHibernate mapping-by-code for a few years; over the last few months I have started using Entity Framework Core.
I'm trying to understand why I have to null out child objects to stop them from inserting new records. I'm not sure if it's a misunderstanding on my part or if this is simply how Entity Framework works.
I have two classes, Command and CommandCategory. A Command has a single CommandCategory, and a CommandCategory can have many Commands. For example, the command "set timeout" would go under the "Configuration" category. Similarly, the "set URL" command would also go under the "Configuration" category.
class Command
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string CommandString { get; set; }
    public Guid CommandCategoryId { get; set; }
    public CommandCategory CommandCategory { get; set; }
}

class CommandCategory
{
    public CommandCategory(string id, string name)
    {
        Id = Guid.Parse(id);
        Name = name;
        Commands = new List<Command>();
    }

    public Guid Id { get; set; }
    public string Name { get; set; }
    public ICollection<Command> Commands { get; set; }
}
My DbContext is set up like so:
class EfContext : DbContext
{
    private const string DefaultConnection = "XXXXX";

    public virtual DbSet<Command> Command { get; set; }
    public virtual DbSet<CommandCategory> CommandCategory { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (!optionsBuilder.IsConfigured)
        {
            optionsBuilder.UseSqlServer(DefaultConnection);
            optionsBuilder.EnableSensitiveDataLogging();
        }
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Command>()
            .HasOne(x => x.CommandCategory)
            .WithMany(x => x.Commands);
    }
}
Then here is the code that actually runs it all. First I call Add(), which creates a new Command and adds it to the database. It also creates a CommandCategory called "Configuration", and both are inserted correctly.
Next I call AddWithExisting(). This creates a new Command but uses the existing CommandCategory. When it tries to save, it first inserts the Command and then tries to insert the CommandCategory. Because CommandCategory.Id already exists, and it is set up as the primary key, this fails with a duplicate key error. To get around this I have to make sure the CommandCategory property on the Command object is set to null; then only the Command is inserted, not the CommandCategory.
I know you usually wouldn't create a new CommandCategory object, but in this instance I am simulating the object coming up from the client via an ApiController. My application sends data back and forth via Web API, so the object is effectively created fresh when a request is made.
Nulling the property seems like a strange thing to do; I thought the point of object-relational mapping was to not have to deal with individual properties like this.
Is this how it's supposed to function, or am I doing something wrong?
class Program
{
    static void Main(string[] args)
    {
        var dbContext = new EfContext();
        Add(dbContext);
        AddWithExisting(dbContext);
        Console.WriteLine("Hello World!");
    }

    private static void Add(EfContext dbContext)
    {
        var newCommand = new Command();
        newCommand.Id = Guid.NewGuid();
        newCommand.Name = "set timeout";
        newCommand.CommandString = "timeout:500;";

        var newCommandCategory = new CommandCategory("8C0D0E31-950E-4062-B783-6817404417D4", "Configuration");
        newCommandCategory.Commands.Add(newCommand);
        newCommand.CommandCategory = newCommandCategory;

        dbContext.Command.Add(newCommand);
        dbContext.SaveChanges();
    }

    private static void AddWithExisting(EfContext dbContext)
    {
        var newCommand = new Command();
        newCommand.Id = Guid.NewGuid();
        newCommand.Name = "set URL";
        newCommand.CommandString = "url:www.stackoverflow.com";

        // This uses the same Id and Name as the existing category,
        // to simulate a REST call coming up with all the data.
        var newCommandCategory = new CommandCategory("8C0D0E31-950E-4062-B783-6817404417D4", "Configuration");
        newCommandCategory.Commands.Add(newCommand);

        // If I don't null out the line below, the category is inserted a second time.
        newCommand.CommandCategory = newCommandCategory;
        newCommand.CommandCategoryId = newCommandCategory.Id;

        dbContext.Command.Add(newCommand);
        dbContext.SaveChanges();
    }
}
This is by design. You can do one of two things here:
Look up the existing CommandCategory from the DB and set that as the property (as that object is attached to the DbContext, EF won't try to create a new one).
Just set the ID of the CommandCategory on the Command.
e.g.
newCommand.CommandCategory = dbContext.CommandCategory.Find(new Guid("8C0D0E31-950E-4062-B783-6817404417D4"));
or
newCommand.CommandCategoryId = new Guid("8C0D0E31-950E-4062-B783-6817404417D4");
At the moment, EF is seeing a new CommandCategory (not attached), so it is trying to create it.
EF doesn't perform insert-or-update checks. Entities are tracked by a DbContext with a state such as Added or Modified. If you interact with a tracked entity or Add an entity to the DbContext, all untracked related entities will be treated as Added as well, resulting in inserts.
The simplest advice I can give is to give EF the benefit of the doubt when it comes to entities and not try to prematurely optimize. It can save headaches.
using (var dbContext = new EfContext())
{
    var newCommand = Add(dbContext);
    AddWithExisting(newCommand, dbContext);
    dbContext.SaveChanges();
    Console.WriteLine("Hello World!");
}

private static Command Add(EfContext dbContext)
{
    var newCommand = new Command
    {
        Id = Guid.NewGuid(), // Should either let the DB set this by default, or use a sequential ID implementation.
        Name = "set timeout",
        CommandString = "timeout:500;"
    };

    var commandCategoryId = new Guid("8C0D0E31-950E-4062-B783-6817404417D4");
    var commandCategory = dbContext.CommandCategory
        .SingleOrDefault(x => x.Id == commandCategoryId)
        ?? new CommandCategory("8C0D0E31-950E-4062-B783-6817404417D4", "Configuration");

    newCommand.CommandCategory = commandCategory;
    dbContext.Command.Add(newCommand);
    return newCommand;
}

private static Command AddWithExisting(Command command, EfContext dbContext)
{
    var newCommand = new Command
    {
        Id = Guid.NewGuid(),
        Name = "set URL",
        CommandString = "url:www.stackoverflow.com",
        // Reuse the same tracked instance rather than a second object with the same Id.
        CommandCategory = command.CommandCategory
    };
    dbContext.Command.Add(newCommand);
    return newCommand;
}
So what's changed here?
First, the DbContext is disposable, so it should always be wrapped in a using block. Next, we create the initial Command, and as a safety measure we search the context for an existing CommandCategory by ID and associate that; otherwise we create the category and associate it with the Command. One-to-many relationships do not need to be bi-directional, and even if you do want bi-directional relationships, you don't typically need to set both references to each other if the mappings are set up correctly. If it ever makes sense to load a CommandCategory and navigate to all commands in that category, keep the collection; but even querying all commands for a specific category is easy enough from the Command side. Bi-directional references can cause annoying issues, so I don't recommend them unless they are really necessary.
We return the new Command from the first call and pass it into the second. We really only needed to pass the reference to the CommandCategory loaded/created in the first call, but in case it makes sense to check or copy info from the first command, I used this example. We create the additional Command instance and set its CommandCategory reference to the same instance as the first one, then return the new Command as well (we don't actually use that reference). The important difference from what you had tried is that CommandCategory here points to the same reference, not two references with the same ID. EF will track that instance as it is associated/added and wire up the appropriate SQL.
Lastly, note that the SaveChanges call has moved outside the two calls. A context should generally only save changes once in its lifetime; everything is committed together. Multiple SaveChanges calls are usually a smell of developers manually wiring up associations when keys are auto-generated by the DB (identity columns or defaults). Provided relationships are mapped correctly with navigation properties and their FKs, EF is quite capable of managing these automatically. For instance, if you set up your DB to default your Command IDs to newsequentialid() and tell EF to treat the PK as store-generated, EF will handle this all automatically, including associating those new PKs as FKs on related entities. There is no need to save the parent record just so the parent ID can be set on the child entities: map it, associate them, and let EF take care of it.
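The store-generated key setup mentioned above can be expressed in OnModelCreating. A minimal sketch, assuming SQL Server and that you are happy for the database to generate the Guid keys:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Command>()
        .HasOne(x => x.CommandCategory)
        .WithMany(x => x.Commands);

    // Let SQL Server generate sequential GUID keys, so client code
    // never has to assign Command.Id before saving.
    modelBuilder.Entity<Command>()
        .Property(x => x.Id)
        .HasDefaultValueSql("newsequentialid()")
        .ValueGeneratedOnAdd();
}
```

With this in place, new Command instances can leave Id at its default value, and EF reads the generated key back after SaveChanges and propagates it to any dependent FKs.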
I have a problem creating a related entity in Entity Framework Core 2.0. I've just created the solution, consisting of an ASP.NET Core backend project and a UWP project acting as client. Both projects share the model. The two model classes are:
public class UnitOfWork
{
    public int UnitOfWorkId { get; set; }
    public string Name { get; set; }
    public Human Human { get; set; }
}

public class Human
{
    public int HumanId { get; set; }
    public string Name { get; set; }
    public List<UnitOfWork> WorkDone { get; set; }
}
As you can see, the model is very simple: one Human has many UnitOfWorks. By the way, the backend is connected to an Azure SQL database. I've looked at the migration classes, and the database schema looks good to me.
The problem I have is when I want to create a unit of work referencing an existing human, using HTTP. The controller is fairly simple:
[HttpPost]
public UnitOfWork Post([FromBody] UnitOfWork unitOfWork)
{
    using (var db = new DatabaseContext())
    {
        db.UnitsOfWork.Add(unitOfWork);
        var count = db.SaveChanges();
        Console.WriteLine("{0} records saved to database", count);
    }
    return unitOfWork;
}
Again, nothing fancy here.
How can I create a unit of work and assign it to an existing human? If I try it with an existing human, like this:
var humans = await Api.GetHumans();
var firstHuman = humans.First();

var unitOfWorkToCreate = new UnitOfWork()
{
    Name = TbInput.Text,
    Human = firstHuman,
};
I get this error:
Cannot insert explicit value for identity column in table 'Humans' when IDENTITY_INSERT is set to OFF
I feel that setting IDENTITY_INSERT to ON would make the error go away, but that is not what I want to do. In the client, I'll select an existing human, type a name for the unit of work, and create the latter. What is the correct way to proceed?
EDIT: Following @Ivan Stoev's answer, I've updated the UnitOfWork controller to attach unitOfWork.Human. This led to:
Newtonsoft.Json.JsonSerializationException: 'Unexpected end when deserializing array. Path 'human.workDone', line 1, position 86.'
Investigating (seen here), EF Core expects collections (like Human.WorkDone) to be created in the constructor, so I did that, and the nulls during deserialization went away. However, now I have a self-referencing loop:
Newtonsoft.Json.JsonSerializationException: Self referencing loop detected with type 'PlainWorkTracker.Models.UnitOfWork'. Path 'human.workDone'.
Any ideas? Thanks!
The operation in question falls into the Saving Disconnected Entities category.
The Add method marks all entities in the graph that are not currently tracked as new (Added), and SaveChanges will then try to insert them into the database.
You need a way to tell EF that unitOfWork.Human is an existing entity. The simplest way to achieve that is to Attach it (which marks it as Unchanged, i.e. existing) to the context before calling Add:
db.Attach(unitOfWork.Human);
db.Add(unitOfWork);
// ...
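Dropped into the controller from the question, the fix might look like this (a sketch; the null check is an addition in case the client posts a unit of work without a human):

```csharp
[HttpPost]
public UnitOfWork Post([FromBody] UnitOfWork unitOfWork)
{
    using (var db = new DatabaseContext())
    {
        // Tell the context the Human already exists so it is not re-inserted.
        if (unitOfWork.Human != null)
            db.Attach(unitOfWork.Human);

        db.UnitsOfWork.Add(unitOfWork);
        var count = db.SaveChanges();
        Console.WriteLine("{0} records saved to database", count);
    }
    return unitOfWork;
}
```

As for the self-referencing loop in the EDIT: that is a Json.NET serialization issue, and the usual fix is to set ReferenceLoopHandling.Ignore in the MVC JSON options (or decorate one side of the relationship with [JsonIgnore]).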
I am using .NET Core Entity Framework for the first time and want to know if it is possible to generate a custom model.
When setting up the app, EF created all the models from the database. That is fine and expected.
However, I have now created a new controller that returns data that is the result of a complicated LINQ query.
All my other controllers return a model like this:
return View(characterList);
where characterList is an actual model of a database table.
But how would I create a brand new custom model that does not represent any table in the database?
Thanks!
You would first simply create the model you want in your code.
For example:
public class NewModel
{
    public string Test { get; set; }
}
Then you can use your context and the power of LINQ's Select to project into your new model.
Something like this:
List<NewModel> list = dbContext.Set<OldModel>()
    .Where(...)
    .Select(x => new NewModel { Test = x.OldTestString })
    .ToList();
This gives you a list of the new model. You could, for example, include other tables and join them in the query to make it more complicated, but this should give you a starting point.
If the model you are describing is intended to be used only by the views, consider creating a ViewModel: basically a class that contains only the properties needed for the view, usually without any logic, or only logic immediately necessary for display.
For example, you'd create a new class, let's say CharacterVM
public class CharacterVM
{
    public string Name { get; set; }
    public string CharacterType { get; set; }
    public bool Invincible { get; set; }
}
In your view you'd use CharacterVM, which has all the properties exposed by the CharacterVM class:
@model CharacterVM
The most important step is remapping the properties from your database model (let's say it is called Character): all you have to do is remap the properties of Character onto a new instance of CharacterVM and pass that to the view.
public IActionResult Index(int idCharacter)
{
    var character = db.Characters.SingleOrDefault(c => c.idCharacter == idCharacter);
    var characterVM = new CharacterVM()
    {
        Name = character.Name,
        CharacterType = character.Type.Name,
        Invincible = false
    };
    return View(characterVM);
}
I am using the repository pattern to provide access to and saving of my aggregates.
The problem is the updating of aggregates which consist of a relationship of entities.
For example, take the Order and OrderItem relationship. The aggregate root is Order which manages its own OrderItem collection. An OrderRepository would thus be responsible for updating the whole aggregate (there would be no OrderItemRepository).
Data persistence is handled using Entity Framework 6.
Update repository method (DbContext.SaveChanges() occurs elsewhere):
public void Update(TDataEntity item)
{
    var entry = context.Entry<TDataEntity>(item);
    if (entry.State == EntityState.Detached)
    {
        var set = context.Set<TDataEntity>();
        TDataEntity attachedEntity = set.Local.SingleOrDefault(e => e.Id.Equals(item.Id));
        if (attachedEntity != null)
        {
            // If an entity with this identity is already attached, copy the new values onto it.
            var attachedEntry = context.Entry(attachedEntity);
            attachedEntry.CurrentValues.SetValues(item);
        }
        else
        {
            entry.State = EntityState.Modified;
        }
    }
}
In my above example, only the Order entity will be updated, not its associated OrderItem collection.
Would I have to attach all the OrderItem entities? How could I do this generically?
Julie Lerman gives a nice way to deal with updating an entire aggregate in her book Programming Entity Framework: DbContext.
As she writes:
When a disconnected entity graph arrives on the server side, the server will not know the state of the entities. You need to provide a way for the state to be discovered so that the context can be made aware of each entity's state.
This technique is called painting the state.
There are mainly two ways to do that:
Iterate through the graph using your knowledge of the model and set the state for each entity
Build a generic approach to track state
The second option is really nice and consists of creating an interface that every entity in your model implements. Julie uses an IObjectWithState interface that exposes the current state of the entity:
public interface IObjectWithState
{
    State State { get; set; }
}

public enum State
{
    Added,
    Unchanged,
    Modified,
    Deleted
}
The first thing you have to do is automatically set the state to Unchanged for every entity retrieved from the DB, by adding a constructor to your context class that hooks up an event:
public YourContext()
{
    ((IObjectContextAdapter)this).ObjectContext.ObjectMaterialized += (sender, args) =>
    {
        var entity = args.Entity as IObjectWithState;
        if (entity != null)
        {
            entity.State = State.Unchanged;
        }
    };
}
Then change your Order and OrderItem classes to implement the IObjectWithState interface, and call this ApplyChanges method, which accepts the root entity as a parameter:
private static void ApplyChanges<TEntity>(TEntity root)
    where TEntity : class, IObjectWithState
{
    using (var context = new YourContext())
    {
        context.Set<TEntity>().Add(root);
        CheckForEntitiesWithoutStateInterface(context);

        foreach (var entry in context.ChangeTracker.Entries<IObjectWithState>())
        {
            IObjectWithState stateInfo = entry.Entity;
            entry.State = ConvertState(stateInfo.State);
        }
        context.SaveChanges();
    }
}
private static void CheckForEntitiesWithoutStateInterface(YourContext context)
{
    var entitiesWithoutState =
        from e in context.ChangeTracker.Entries()
        where !(e.Entity is IObjectWithState)
        select e;

    if (entitiesWithoutState.Any())
    {
        throw new NotSupportedException("All entities must implement IObjectWithState");
    }
}
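ApplyChanges references a ConvertState helper that is not shown above; it simply translates the persistence-ignorant State enum into EF's EntityState. A minimal sketch:

```csharp
private static EntityState ConvertState(State state)
{
    // Map our own State enum onto EF's change-tracking states.
    switch (state)
    {
        case State.Added:
            return EntityState.Added;
        case State.Modified:
            return EntityState.Modified;
        case State.Deleted:
            return EntityState.Deleted;
        default:
            return EntityState.Unchanged;
    }
}
```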
Last but not least, do not forget to set the right state of your graph entities before calling ApplyChanges ;-) (You could even mix Modified and Deleted states within the same graph.)
Julie proposes to go even further in her book:
you may find yourself wanting to be more granular with the way modified properties are tracked. Rather than marking the entire entity as modified, you might want only the properties that have actually changed to be marked as modified.
In addition to marking an entity as modified, the client is also responsible for recording which properties have been modified. One way to do this would be to add a list of modified property names to the state tracking interface.
But as my answer is already too long, go read her book if you want to know more ;-)
My opinionated (DDD specific) answer would be:
Cut off the EF entities at the data layer.
Ensure your data layer only returns domain entities (not EF entities).
Forget about the lazy-loading and IQueryable() goodness (read: nightmare) of EF.
Consider using a document database.
Don't use generic repositories.
The only way I've found to do what you ask in EF is to first delete or deactivate all order items in the database that are children of the order, then add or reactivate all order items that are part of the newly updated order.
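That delete-then-re-add approach can be sketched roughly like this for the Order/OrderItem aggregate from the question (the names and FK property are illustrative; this assumes EF 6 and that OrderItems are fully owned by the Order):

```csharp
public void Update(Order order)
{
    // Remove the children currently stored for this order...
    var existingItems = context.OrderItems
        .Where(i => i.OrderId == order.Id)
        .ToList();
    context.OrderItems.RemoveRange(existingItems);

    // ...then re-add the items the updated aggregate now contains.
    foreach (var item in order.Items)
    {
        context.OrderItems.Add(item);
    }

    context.Entry(order).State = EntityState.Modified;
}
```

It is blunt (every update rewrites the child rows), but it avoids having to diff the incoming collection against the database state.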
You have done well with the update method for your aggregate root. Look at this domain model:
public class ProductCategory : EntityBase<Guid>
{
    public virtual string Name { get; set; }
}

public class Product : EntityBase<Guid>, IAggregateRoot
{
    private readonly IList<ProductCategory> _productCategories = new List<ProductCategory>();

    public void AddProductCategory(ProductCategory productCategory)
    {
        _productCategories.Add(productCategory);
    }
}
It is just a Product which has a ProductCategory. I've only created a ProductRepository, as my aggregate root is Product (not ProductCategory), but I want to add the product category when I create or update the product in the service layer:
public CreateProductResponse CreateProduct(CreateProductRequest request)
{
    var response = new CreateProductResponse();
    try
    {
        var productModel = request.ProductViewModel.ConvertToProductModel();
        var product = new Product();
        product.AddProductCategory(productModel.ProductCategory);
        _productRepository.Add(product);
        _unitOfWork.Commit();
    }
    catch (Exception exception)
    {
        response.Success = false;
    }
    return response;
}
I just wanted to show how to create domain methods for entities in the domain and use them in the service or application layer. As you can see, the line below is what causes the ProductCategory to be saved via the ProductRepository:
product.AddProductCategory(productModel.ProductCategory);
Now, for updating the same entity, you can ask the ProductRepository to fetch the entity and make changes on it.
Note that for retrieving entities and value objects of an aggregate separately, you can write a query service or read-only repository:
public class BlogTagReadOnlyRepository : ReadOnlyRepository<BlogTag, string>, IBlogTagReadOnlyRepository
{
    public IEnumerable<BlogTag> GetAllBlogTagsQuery(string tagName)
    {
        throw new NotImplementedException();
    }
}
Hope it helps.
I'm hoping to set up an EF Code First convention where all of the column names of the properties have a lowercase first letter.
However, I have other Fluent API code that changes column names from the default. I can't seem to find a way to get the current column name of a property in order to lowercase its first letter. Starting with the PropertyInfo, as in modelBuilder.Properties(), is not enough, because the column name may already have been set to something different than the member name.
How do I generically tell EF Code First to lowercase the first letter of all column names?
OK, the DBAs have spoken. Let's bow our heads in reverence and see what we can do. I'm afraid that in EF 5 (and lower) there's not much you can do to make this easy. EF 6, however, has Custom Code First Conventions, which make it a piece of cake. I just tried a small sample:
// Just some POCO
class Person
{
    public int PersonId { get; set; }
    public string PersonName { get; set; }
}

// A custom convention.
class FirstCharLowerCaseConvention : IStoreModelConvention<EdmProperty>
{
    public void Apply(EdmProperty property, DbModel model)
    {
        property.Name = property.Name.Substring(0, 1).ToLower()
                      + property.Name.Substring(1);
    }
}

class MyContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Person>();

        // Add the convention to the model builder.
        modelBuilder.Conventions.Add(new FirstCharLowerCaseConvention());

        base.OnModelCreating(modelBuilder);
    }
}
After running
using (var db = new MyContext())
{
    db.Database.Create();
}
my database has a People table with personId and personName columns.
And some simple CRUD actions work flawlessly:
using (var db = new MyContext())
{
    var p = new Person { PersonName = "Another Geek" };
    db.Set<Person>().Add(p);
    db.SaveChanges();
}

using (var db = new MyContext())
{
    var x = db.Set<Person>().ToList();
}
So if the DBAs want their conventions, you can demand a new toy :)