Using EFCore 3.1 with the library EFCore.BulkExtensions 3.6.1 (latest version for EFCore 3.1).
Database server is SQL Server 2019.
Here is code to reproduce the error.
A simple Customer class with a navigation property from another class:
public class Customer
{
    public int ID { get; set; }
    public String Name { get; set; }
    public Cont Continent { get; set; }
}
public class Cont
{
    public int ID { get; set; }
    public String Name { get; set; }
}
When I try to insert entities into Customers with populated navigation properties
using the "BulkInsert" method from the EFCore.BulkExtensions library, the values of the navigation properties do not get saved to the database:
Customer cust1 = new Customer
{
    Continent = contList.Find(x => x.Name == "Europe"),
    Name = "Wesson Co, Ltd."
};
Customer cust2 = new Customer
{
    Continent = contList.Find(x => x.Name == "Asia"),
    Name = "Tiranha, Inc."
};
// (checking the "Continent" props here shows them to be properly populated)
List<Customer> CustomerList = new List<Customer> { cust1, cust2 };
dbContext.BulkInsert(CustomerList);
The result is that the "ContinentID" column in the database is NULL.
An alternate way, the usual EF Core SaveChanges() approach, works - change the last two lines to:
dbContext.Customers.AddRange(cust1, cust2);
dbContext.SaveChanges();
This works totally fine. But I have to insert a million records, and SaveChanges() has horrible performance in that scenario.
Is there anything I am doing wrong?
Using another (lower) version of EFCore.BulkExtensions does not help. Higher versions won't work, as they all target EF Core 5 with .NET Standard 2.1, which my project does not currently support.
I could not find any hint or mention of navigation properties in the EFCore.BulkExtensions documentation.
Inspecting the SQL being sent only shows me a query like this
INSERT INTO dbo.Customers (ContinentID, Name) VALUES (@p1, @p2)
so it is up to BulkExtensions.BulkInsert() to place the values correctly, which it seemingly does not.
The point is that similar code has been working for 6 months, and now, with a scenario as simple as the one above, it won't, for any version of the BulkExtensions library. So it is likely that something is wrong with my code or my approach, but I cannot find it.
UPDATE
Downgrading the package EFCore.BulkExtensions to 3.1.6 gives me a different error. Still does not work but here is the error:
System.InvalidOperationException : The given value 'Customer' of type String from the data source cannot be converted to type int for Column 2 [ContinentID] Row 1.
----> System.FormatException : Failed to convert parameter value from a String to a Int32.
----> System.FormatException : Input string was not in a correct format.
As it stands right now, this is a bug in the EFCore.BulkExtensions library: versions 3.2.1 through 3.3.5 handle it (mostly) correctly; versions 3.3.6 - 3.6.1 do not.
Use version 3.3.5 for the most stable result, as of this writing.
(No data on version 5.x for EFCore 5)
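In the meantime, a workaround that sidesteps the navigation fix-up entirely is to expose an explicit foreign-key property on the entity and set it yourself before calling BulkInsert. This is only a sketch, assuming adding a ContinentID property (mapped to the existing column) to the model is acceptable:

// Sketch: with an explicit FK property, BulkInsert writes the column directly
// instead of having to resolve the Continent navigation.
public class Customer
{
    public int ID { get; set; }
    public String Name { get; set; }
    public int? ContinentID { get; set; }   // assumed FK property/column name
    public Cont Continent { get; set; }
}

// ...

Customer cust1 = new Customer
{
    ContinentID = contList.Find(x => x.Name == "Europe")?.ID,
    Name = "Wesson Co, Ltd."
};
dbContext.BulkInsert(new List<Customer> { cust1 });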
I am getting SQL errors when trying to use REST to get to FSAppointmentDet.InventoryID, either as a Field Service service item or as an Inventory Item.
The InventoryID field exists in the table; however, it looks like the DACs have been inherited, for example as FSAppointmentDetService.
Other fields work; it just seems that the fields with an ID are causing the SQL error.
In this case, the SQL error is a multi-part identifier that could not be bound. Running a SQL Profiler trace and looking at the SQL, it looks like the table has been aliased in one part of the query and not in another. Obviously this is occurring at a level much lower than we can get to, so I'm looking for a workaround or ideas on how to get the InventoryID for Field Service detail records.
I've seen this happen when one DAC inherits (inherits as in class inheritance, not extends as in a DAC extension) from another DAC without redeclaring its key fields. The way to fix that is to re-declare the parent's key fields (the abstract class and the property) in the child.
FSAppointmentDetService seems to be missing the AppointmentID key declaration. When the ORM builds the SQL query it generates aliases for the inherited DAC, but it gets confused because the key fields of the parent were not all re-declared in the child.
In FSAppointmentDet you have 2 key fields:
#region AppointmentID
public abstract class appointmentID : PX.Data.IBqlField
{
}
[PXDBInt(IsKey = true)]
[PXParent(typeof(Select<FSAppointment, Where<FSAppointment.appointmentID, Equal<Current<FSAppointmentDet.appointmentID>>>>))]
[PXDBLiteDefault(typeof(FSAppointment.appointmentID))]
[PXUIField(DisplayName = "Appointment Nbr.")]
public virtual int? AppointmentID { get; set; }
#endregion
#region AppDetID
public abstract class appDetID : PX.Data.IBqlField
{
}
[PXDBIdentity(IsKey = true)]
public virtual int? AppDetID { get; set; }
#endregion
But in FSAppointmentDetService only one of them is redeclared. Notice how it uses 'override' to redeclare, compared to FSAppointmentDet, which does not:
#region AppDetID
public new abstract class appDetID : PX.Data.IBqlField
{
}
[PXDBIdentity(IsKey = true)]
public override int? AppDetID { get; set; }
#endregion
In this case we can't add the field to that DAC, though, because it's part of the base product. I think it would be possible to create a new DAC that inherits from FSAppointmentDetService, add the missing key there (as sketched below), and use that new derived DAC instead of FSAppointmentDetService.
However, I don't know whether that would be possible when working with web services. If not, the change will have to be made in the Acumatica base product. You could file a bug report with Acumatica support to have that done in future versions.
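As a rough, untested sketch (the class name is made up, and I haven't verified this against the Field Service screens), the derived DAC would re-declare the missing AppointmentID key along the same lines as the base field shown above:

// Untested sketch: re-declare the missing key from FSAppointmentDet so the ORM
// can build the aliases correctly for the inherited DAC.
public class FSAppointmentDetServiceWithKey : FSAppointmentDetService
{
    #region AppointmentID
    public new abstract class appointmentID : PX.Data.IBqlField
    {
    }
    [PXDBInt(IsKey = true)]
    [PXDBLiteDefault(typeof(FSAppointment.appointmentID))]
    [PXUIField(DisplayName = "Appointment Nbr.")]
    public override int? AppointmentID { get; set; }
    #endregion
}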
I have an MVC application that uses entity framework / code first. I'm trying to set up always encrypted in order to encrypt a column (social security number / SSN). I'm running everything in Azure, including using Azure vault to store keys.
I have two models, SystemUser and Person. SystemUser is essentially an account / login which can administer 1 or more People.
The definitions look a bit like:
public class Person
{
    [StringLength(30)]
    [Column(TypeName = "varchar")]
    public string SSN { get; set; } // Social Security Number
    ...
    [Required, MaxLength(128)]
    public string SystemUserID { get; set; }
    [ForeignKey("SystemUserID")]
    public virtual SystemUser SystemUser { get; set; }
    ...
}
public class SystemUser
{
    ...
    [ForeignKey("SystemUserID")]
    public virtual HashSet<Person> People { get; set; }
    ...
}
I have a very basic page set up that just looks up a user and prints out their SSN. This works. I then adapted the page to update the SSN, and this also works. This implies to me that the Always Encrypted configuration and Azure Vault are set up correctly. I've got "Column Encryption Setting=Enabled" in the connection string, and I encrypted the SSN column using SSMS (I'm using deterministic encryption).
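For reference, the connection string looks roughly like this (server, database, and credentials are placeholders; the relevant part is the last keyword):

// Placeholder values - only the Column Encryption Setting keyword matters here.
var connectionString =
    "Server=tcp:myserver.database.windows.net,1433;Database=MyDb;" +
    "User ID=myuser;Password=<secret>;" +
    "Column Encryption Setting=Enabled;";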
In my SystemUser class I have the following method as an implementation for Identity:
public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<SystemUser> manager)
{
    ...
    if (this.People.Any())
    {
        ...
    }
    ...
}
This is used for user logins. Running the code results in a:
System.Data.Entity.Core.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details. ---> System.Data.SqlClient.SqlException: Operand type clash: varchar is incompatible with varchar(30) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'CEK_Auto11', column_encryption_key_database_name = 'xxx') collation_name = 'Latin1_General_BIN2'
It seems to fail on the line above "if (this.People.Any())". Putting a break point just before that line reveals the following about this.People:
'((System.Data.Entity.DynamicProxies.SystemUser_9F939A0933F4A8A3724213CF7A287258E76B1C6775B69BD1823C0D0DB6A88360)this).People' threw an exception of type 'System.Data.Entity.Core.EntityCommandExecutionException'
System.Collections.Generic.HashSet {System.Data.Entity.Core.EntityCommandExecutionException}
Any ideas here? Am I doing something that Always Encrypted does not support?
Always Encrypted is not yet fully supported in Entity Framework; Microsoft is still working on it.
The blog post Using Always Encrypted with Entity Framework 6 explains how to use Always Encrypted with Entity Framework 6 for Database First, Code First from an existing database, and Code First Migrations, with workarounds for different scenarios and problems.
According to https://blogs.msdn.microsoft.com/sqlsecurity/2015/08/27/using-always-encrypted-with-entity-framework-6/
Pass the constant argument as closure – this will force parameterization, producing the correct query:
var ssn = "123-45-6789";
context.Patients.Where(p => p.SSN == ssn);
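For contrast, a sketch of the pattern the blog post warns against: an inline literal is embedded in the query text instead of being parameterized, which produces exactly the operand type clash above:

// Anti-pattern: the constant is emitted as a plain varchar literal in the SQL,
// which cannot be compared against the encrypted SSN column.
context.Patients.Where(p => p.SSN == "123-45-6789");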
I have the following in Entity Framework Core:
public class Book {
    public Int32 Id { get; set; }
    public String Title { get; set; }
    public virtual Theme Theme { get; set; }
}
public class Theme {
    public Int32 Id { get; set; }
    public String Name { get; set; }
    public Byte[] Illustration { get; set; }
    public virtual ICollection<Book> Books { get; set; }
}
And I have the following linq query:
List<BookModel> books = await context.Books.Select(x =>
    new BookModel {
        Id = x.Id,
        Name = x.Title,
        Theme = new ThemeModel {
            Id = x.Theme.Id,
            Name = x.Theme.Name
        }
    }).ToListAsync();
I didn't need to include the Theme to make this work, e.g:
List<BookModel> books = await context.Books.Include(x => x.Theme).Select(x => ...
When will I need to use Include in Entity Framework?
UPDATE
I added a column of type Byte[] Illustration to Theme. In my projection I am not including that column, so will it be loaded if I use Include? Or is it never loaded unless I have it in the projection?
In search of an official answer to your question from Microsoft's side, I found this quote from Diego Vega (part of the Entity Framework and .NET team) in the aspnet/Announcements GitHub repository:
A very common issue we see when looking at user LINQ queries is the use of Include() where it is unnecessary and cannot be honored. The typical pattern usually looks something like this:
var pids = context.Orders
    .Include(o => o.Product)
    .Where(o => o.Product.Name == "Baked Beans")
    .Select(o => o.ProductId)
    .ToList();
One might assume that the Include operation here is required because of the reference to the Product navigation property in the Where and Select operations. However, in EF Core, these two things are orthogonal: Include controls which navigation properties are loaded in entities returned in the final results, and our LINQ translator can directly translate expressions involving navigation properties.
You didn't need Include because you were working inside the EF context. When you reference Theme inside the projection you are creating, that's not lazy loading; that's telling EF to do a join.
If you return a list of books and you don't include the themes, then when you try to get a theme you'll notice that it's null. If the EF connection is open and you have lazy loading, it will go to the DB and grab it for you. But if the connection is not open, then you have to load it explicitly.
On the other hand, if you use Include, you get the data right away. Under the hood it's gonna do a JOIN to the necessary table and get the data right there.
You can check the SQL query that EF is generating for you and that's gonna make things clearer for you. You'll see only one SQL query.
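If you're not sure how to see that SQL, here is a minimal sketch (the context name is made up; LogTo exists in EF Core 5 and later, older versions can use UseLoggerFactory instead):

using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

public class LibraryContext : DbContext // hypothetical context exposing Books/Themes
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseSqlServer("<your connection string>")
            .LogTo(Console.WriteLine, LogLevel.Information); // prints the generated SQL
    }
}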
If you Include a child, it is loaded as part of the original query, which makes it larger.
If you don't Include or reference the child in some other way in the query, the initial resultset is smaller, but each child you later reference will lazy load through a new request to the database.
If you loop through 1000 users in one request and then ask for their 10 photos each, you will make 1001 database requests if you don't Include the child...
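As a sketch using the Book/Theme classes from the question (assuming lazy loading proxies are enabled), that N+1 pattern looks like this:

var books = await context.Books.ToListAsync(); // 1 query for all books

foreach (var book in books)
{
    // Each access to book.Theme lazily triggers one extra query per book.
    Console.WriteLine(book.Theme?.Name);
}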
Also, lazy loading requires that the context hasn't been disposed - always an unpleasant surprise when you pass an entity to a view for UI rendering, for example.
update
Try this for example and see it fail:
var book = await context.Books.FirstAsync();
var theme = book.Theme; // Theme was not loaded, so this is null (or lazy-loads if proxies are enabled)
Then try this:
var book = await context.Books.Include(b => b.Theme).FirstAsync();
var theme = book.Theme; // already loaded by the Include
I'm using ASP.NET WebAPI and ran into a problem with a nested model that should be communicated via a WebAPI Controller:
The entities "bond, stock etc." each have a list of entities "price". Server-side, I use the following class to match this requirement..
public class Bond : BaseAsset
{
    public int ID { get; set; }
    public string Name { get; set; }
    public virtual List<Price> Prices { get; set; }
}
This leads to the "Price" table having a foreign key column for bond, stock, etc., and, in case a price is attached to a bond, an entry in its bond foreign key column.
The error I initially got was
There is already an open DataReader associated with this Command
I fixed that by altering the Connection String to allow MultipleActiveResultSets.
However, I feel there must be better options or at least alternatives when handling nested models. Is it, e.g., a sign for bad model design when one runs into such a problem? Would eager loading change anything?
One alternative to MARS is to disable lazy loading.
In your DbContext
Configuration.LazyLoadingEnabled = false;
Plus, when you are loading your data, you can eagerly load your child tables:
context.Bonds.Include(b => b.Prices)
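A rough sketch of how both pieces could fit together with the Bond/Price model from the question (the context and controller names are made up, assuming EF6 with ASP.NET Web API 2):

using System.Data.Entity; // EF6: DbContext, Include()
using System.Linq;
using System.Web.Http;    // ASP.NET Web API

public class AssetContext : DbContext // hypothetical context name
{
    public DbSet<Bond> Bonds { get; set; }

    public AssetContext()
    {
        // Avoid lazy loading while the result is being serialized
        // (and the "open DataReader" error that comes with it).
        Configuration.LazyLoadingEnabled = false;
    }
}

public class BondsController : ApiController
{
    public IHttpActionResult Get()
    {
        using (var context = new AssetContext())
        {
            // Eagerly load Prices in the same query instead of relying on MARS.
            var bonds = context.Bonds.Include(b => b.Prices).ToList();
            return Ok(bonds);
        }
    }
}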
Is it possible to call a TVF in EF6 Code First?
I started a new project using EF6 Database first and EF was able to import a TVF into the model and call it just fine.
But updating the model became very time consuming and problematic with the large read-only db with no RI that I'm stuck dealing with.
So I tried to convert to EF6 code first using the Power Tools Reverse Engineering tool to generate a context and model classes.
Unfortunately the Reverse Engineering tool didn't import the TVFs.
Next I tried to copy the DBFunctions from my old Database First DbContext to the new Code First DbContext, but that gave me an error that my TVF:
"cannot be resolved into a valid type or function".
Is it possible to create a code first Fluent mapping for TVFs?
If not, is there a work-around?
I guess I could use SPs instead of TVFs, but was hoping I could use mostly TVFs to deal with the problematic DB I'm stuck with.
Thanks for any work-around ideas
This is now possible. I created a custom model convention which allows using store functions in Code First in EF 6.1. The convention is available on NuGet: http://www.nuget.org/packages/EntityFramework.CodeFirstStoreFunctions. Here is the link to the blog post containing all the details: http://blog.3d-logic.com/2014/04/09/support-for-store-functions-tvfs-and-stored-procs-in-entity-framework-6-1/
[Tested]
using:
Install-Package EntityFramework.CodeFirstStoreFunctions
Declare a class for output result:
public class MyCustomObject
{
    [Key]
    public int Id { get; set; }
    public int Rank { get; set; }
}
Create a method in your DbContext class
[DbFunction("MyContextType", "SearchSomething")]
public virtual IQueryable<MyCustomObject> SearchSomething(string keywords)
{
var keywordsParam = new ObjectParameter("keywords", typeof(string))
{
Value = keywords
};
return (this as IObjectContextAdapter).ObjectContext
.CreateQuery<MyCustomObject>(
"MyContextType.SearchSomething(#keywords)", keywordsParam);
}
Add
public DbSet<MyCustomObject> SearchResults { get; set; }
to your DbContext class
Add this in the overridden OnModelCreating method:
modelBuilder.Conventions.Add(new FunctionsConvention<MyContextType>("dbo"));
And now you can call/join with a table-valued function like this:
CREATE FUNCTION SearchSomething
(
    @keywords nvarchar(4000)
)
RETURNS TABLE
AS
RETURN
(
    SELECT KEY_TBL.RANK AS Rank, Id
    FROM MyTable
    LEFT JOIN freetexttable(MyTable, ([MyColumn1],[MyColumn2]), @keywords) AS KEY_TBL
        ON MyTable.Id = KEY_TBL.[KEY]
    WHERE KEY_TBL.RANK > 0
)
GO
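With that in place, a usage sketch for calling the function from LINQ (and composing further operators on top of it) looks like this:

using (var db = new MyContextType())
{
    // The Where/OrderBy compose over the TVF and are translated into SQL.
    var results = db.SearchSomething("baked beans")
                    .Where(r => r.Rank > 0)
                    .OrderByDescending(r => r.Rank)
                    .ToList();
}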
I was able to access a TVF with the code below. This works in EF6. The model property names have to match the database column names.
List<MyModel> data =
    db.Database.SqlQuery<MyModel>(
        "select * from dbo.my_function(@p1, @p2, @p3)",
        new SqlParameter("@p1", new System.DateTime(2015, 1, 1)),
        new SqlParameter("@p2", new System.DateTime(2015, 8, 1)),
        new SqlParameter("@p3", 12))
    .ToList();
I actually started looking into it in EF6.1 and have something that is working on nightly builds. Check this and this out.
I have developed a library for this functionality. You can review my article on
UserTableFunctionCodeFirst.
You can use your function without writing a SQL query.
Update
First of all, you have to add a reference to the above-mentioned library and then create a parameter class for your function. This class can contain any number and type of parameters:
public class TestFunctionParams
{
    [CodeFunctionAttributes.FunctionOrder(1)]
    [CodeFunctionAttributes.Name("id")]
    [CodeFunctionAttributes.ParameterType(System.Data.SqlDbType.Int)]
    public int Id { get; set; }
}
Now you have to add the following property to your DbContext to call the function and map to the property.
[CodeFunctionAttributes.Schema("dbo")] // This is optional as it is set as dbo as default if not provided.
[CodeFunctionAttributes.Name("ufn_MyFunction")] // Name of function in database.
[CodeFunctionAttributes.ReturnTypes(typeof(Customer))]
public TableValueFunction<TestFunctionParams> CustomerFunction { get; set; }
Then you can call your function as below.
using (var db = new DataContext())
{
    var funcParams = new TestFunctionParams() { Id = 1 };
    var entity = db.CustomerFunction.ExecuteFunction(funcParams).ToList<Customer>();
}
This will call your user-defined function and map the results to the entity.