Entity Framework Core data type conversion differences between SQLite and SQL Server

I have SQLite and SQL Server Express databases, both of which have a table with the same columns, including a CUSTOMER column of type bigint.
My simplified entity looks like this:
public class Customer
{
    public string BusinessIdentifier { get; set; }
}
Notice that the data type differs: bigint in the database versus string on my entity, for example.
I have used the Fluent API to do the mapping, as shown below:
entity.Property(p => p.BusinessIdentifier).HasColumnName("CUSTOMER")
On SQLite, using options.UseSqlite(connectionString); this works just fine. For SQLite, connectionString = "Data Source=my_db.db".
However, when I use SQL Server Express with options.UseSqlServer(connectionString);, I start getting type-mismatch errors.
I have to handle this conversion explicitly in the Fluent API, as below:
entity.Property(p => p.BusinessIdentifier).HasColumnName("CUSTOMER").HasConversion(v => Convert.ToInt64(v), v => v.ToString());
SQL Server connectionString="Data Source=my_machine\SQLEXPRESS;Initial Catalog=my_db;Integrated Security=True;"
Question:
Can someone please explain why there is this difference between the two types of databases, and is it really necessary to be this specific in every case?
Regards
Kiran

SQLite does not enforce data type constraints; it allows you to store a value of a different data type than the column declares, which is probably why your code works fine with SQLite. SQL Server, on the other hand, enforces the declared data type.
You can refer to this doc https://www.sqlite.org/datatype3.html.
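For reference, here is a minimal sketch of a single configuration that works with both UseSqlite and UseSqlServer. Only the Customer mapping comes from the question; the DbContext name and the rest of the scaffolding are assumptions for illustration.
using System;
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext   // hypothetical context name
{
    public DbSet<Customer> Customers { get; set; }

    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>(entity =>
        {
            // Store the string property in the bigint CUSTOMER column.
            // SQLite's type affinity tolerates the mismatch without a converter,
            // but SQL Server enforces the declared column type, so the explicit
            // conversion is required there.
            entity.Property(p => p.BusinessIdentifier)
                  .HasColumnName("CUSTOMER")
                  .HasConversion(v => Convert.ToInt64(v), v => v.ToString());
        });
    }
}
With the converter in place, the same model configuration can be used regardless of which provider is passed to the options.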

Related

EF Core: filter on converted column [duplicate]

Suppose I want to enhance a model like
public class Person
{
    public string Name { get; set; }
    public string Address { get; set; }
}
so that I use a complex object for Address:
public class Person
{
    public string Name { get; set; }
    public Address Address { get; set; }
}
EF Core is quite nice in allowing this with the HasConversion feature:
modelBuilder.Entity<Person>()
    .Property(p => p.Address)
    .HasConversion(addr => addr.ToString(), str => Address.FromString(str));
I even tested this, and it works in queries with the == operator: the following will successfully convert to SQL.
var whiteHouse = Address.Parse("1600 Pennsylvania Avenue NW");
var matches = from person in people
where person.Address == whiteHouse
select person;
However, suppose I want to use string.Contains on the string version of Address, something like
var search = "1600";
var matches = from person in people
where person.Address.ToString().Contains(search)
select person;
This will fail to convert. Is there any feature of EF Core to map the ToString() method or otherwise map a complex object that converts to a string / VARCHAR so that I can write a query like this?
The problem with EF Core value converters and LINQ queries is that the LINQ query is written against the CLR entity property, hence the CLR type rather than the provider type. This is partially mentioned in the Limitations of the value conversion system section of the EF Core documentation:
Use of value conversions may impact the ability of EF Core to translate expressions to SQL. A warning will be logged for such cases. Removal of these limitations is being considered for a future release.
So having the query expression against the CLR type, combined with the inability to translate custom methods, is what causes your issue. Technically it's possible to add a custom method/property translation, but it's quite complicated because it requires a lot of non-user-friendly infrastructure plumbing code, which makes it practically unusable in real-life application development.
In this particular case, though, you know that the provider type is string and that the database table values are generated by the ToString method. So you just need to let the query use the provider type, and you can do that with a cast operator.
Normally the C# compiler won't allow you to cast a known object type to another known object type if there is no conversion between them. But you can trick it with the "double cast" technique: first cast to object and then to the desired type. Fortunately the EF Core translator supports such casts and properly (sort of) translates them to SQL. By "sort of" I mean it emits an unnecessary (redundant) CAST inside the query, but at least it translates and executes server-side.
With that being said, the solution for your example is
where ((string)(object)person.Address).Contains(search)
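Putting it together with the query from the question, the full expression would look something like the sketch below (same people source and Address type as above):
var search = "1600";
var matches = from person in people
              // Cast to object first, then to string, so EF Core compares the
              // provider (string) value instead of the CLR Address type.
              where ((string)(object)person.Address).Contains(search)
              select person;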
By default, EF Core uses server-side evaluation: it tries to translate your expression into the SQL of the selected database provider.
Your expression can't be translated, and the database provider can't handle it, because the logic you wrote in the overridden ToString() lives in your C# code and is unknown to the database provider.
You can force EF Core to use client-side evaluation by fetching all the data into memory and then querying the loaded entities, something like this:
var search = "1600";
var matches = from person in people.ToList()
where person.Address.ToString().Contains(search)
select person;
Note that fetching all data into memory can have a significant performance impact on large databases, so use client-side evaluation carefully.

Npgsql data type mapping of Character from Postgresql to .NET Core 2.0

In a PostgreSQL database I have an RFC column, which is a kind of code used to identify companies and people in Mexico (for tax purposes) that I need to store in my database. The format of this 'code' is like the following:
AAAXXXXXXAXX -> where A's are letters and X's are numbers.
I want to use the RFC column as the primary key. As far as I've searched, the Postgres character data type is good for this, and the SQL for it, as pgAdmin4 generates it, is:
rfc character(13) COLLATE pg_catalog."default" NOT NULL,
CONSTRAINT pk_empresas PRIMARY KEY (rfc)
But inside Visual Studio, using the Package Manager Console and the following command:
Scaffold-DbContext "Host=localhost;Database=database;Username=pgadmin;Password=xxxx" Npgsql.EntityFrameworkCore.PostgreSQL -force
it generates my models mapped from the tables in my database.
The question here is: how can I work with this correctly, given that the .NET char data type only holds a single character and the Rfc property is generated as follows?
public char Rfc { get; set; }
This first approach stores only the first character; I can see it in pgAdmin4.
I've tried changing the Rfc property's data type to string (since some .NET data types can map to others in PostgreSQL, as shown in the Npgsql Supported Types link), like this:
public string Rfc { get; set; }
But this table is also related to another 4 or 5 tables in my database, and I get too many errors when I try to change the data type of this property in my model (as it is also referenced in the related models).
I have to say that I have tried this, but it throws an exception:
Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while updating the entries.
Edit 1:
I'm using Npgsql.EntityFrameworkCore.PostgreSQL version 2.0.1
This has been fixed in version 2.0.2 of the Npgsql EF Core provider, which will be released very soon.
See https://github.com/npgsql/Npgsql.EntityFrameworkCore.PostgreSQL/issues/370 for the GitHub issue.
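Once the provider maps character(n) columns to a .NET string, a fluent configuration along these lines should keep the fixed-length column. This is a sketch only; the Empresa entity name is inferred from the pk_empresas constraint in the question, so adjust the names to your actual scaffolded model.
public string Rfc { get; set; }

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Sketch: entity/property names are assumptions based on the question.
    modelBuilder.Entity<Empresa>(entity =>
    {
        entity.HasKey(e => e.Rfc);
        entity.Property(e => e.Rfc)
              .HasColumnName("rfc")
              .HasColumnType("character(13)")  // fixed-length, matches the PostgreSQL column
              .IsRequired();
    });
}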

Slow linq query when looking for char(1) datatype using contains

I've got an old database with a char(1) Status column. I'm using Code First and Entity Framework 4.3.1 to interact with my database. The Status column in my Code First class is defined as follows:
[Column(TypeName = "char")]
public string Status { get; set; }
I'm writing a LINQ query to fetch all items with a Status of one of several values. The code looks something like this (although it's been simplified):
List<string> statusList = new List<string>() {"N","H","P"};
...
var entries = (from t in context.MyTable where statusList.Contains(t.Status) select t).ToList();
...
The SQL that's generated prefixes all the Status values with N, making the query quite slow:
WHERE ([Extent1].[Status] IN (N'N', N'H', N'P'))
It seems to be because it's comparing Unicode with non-Unicode values, so it can't use the index on the Status column.
I've had similar problems before in other LINQ queries, but I thought they were solved by putting [Column(TypeName = "char")] on the property.
Does anyone know how I can prevent SQL from putting those N's in front of all my Status values? I tried making statusList a List of char, but then I needed to change the definition of the Status column to char in Code First, and that threw errors.
Thanks
David
Are you on .NET Framework 4? I think this was fixed in the EF5 core libraries shipped with .NET Framework 4.5. Here is a Connect bug for this: http://connect.microsoft.com/VisualStudio/feedback/details/709906/entity-framework-linq-provider-defaulting-to-unicode-when-translating-string-contains-to-like-clause The Connect bug also contains a workaround: use the EntityFunctions.AsNonUnicode() function to force strings not to be Unicode, which may be helpful if you can't move to .NET Framework 4.5.
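As a sketch of that workaround (assuming EF 4.x/5, where EntityFunctions lives in System.Data.Objects; in EF6 the equivalent is DbFunctions.AsNonUnicode), comparing against each status value explicitly lets the non-Unicode hint apply:
using System.Data.Objects;   // EntityFunctions

// Sketch: the IN list produced by statusList.Contains is what emits the N'...'
// literals, so compare against each value explicitly and mark it as non-Unicode.
var entries = (from t in context.MyTable
               where t.Status == EntityFunctions.AsNonUnicode("N")
                  || t.Status == EntityFunctions.AsNonUnicode("H")
                  || t.Status == EntityFunctions.AsNonUnicode("P")
               select t).ToList();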

Entity Framework - Mapping decimal(13,0) problem

I'm migrating my company's application (which currently runs on SQL Server and Oracle) to ASP.NET MVC with Entity Framework for persistence.
I created my Entity Model based on the SQL Server database and separately created an SSDL for Oracle (for Oracle I use the DevArt dotConnect for Oracle provider), and I'm running into some painful problems.
My table primary keys are decimal(13,0) on SQL Server and number(13,0) on Oracle. Oracle maps its type to Int64 while SQL Server maps it to decimal, but I need SQL Server to map it to Int64 as well.
I made these modifications manually in the Entity Data Model, and creating records works fine, but when I have to delete or update a record I get this error:
The specified value is not an instance of type 'Edm.Decimal'
Parameter name: value
at System.Data.Common.CommandTrees.DbConstantExpression..ctor(DbCommandTree commandTree, Object value, TypeUsage constantType)
at System.Data.Mapping.Update.Internal.UpdateCompiler.GenerateValueExpression(DbCommandTree commandTree, EdmProperty property, PropagatorResult value)
at System.Data.Mapping.Update.Internal.UpdateCompiler.GenerateEqualityExpression(DbModificationCommandTree commandTree, EdmProperty property, PropagatorResult value)
at System.Data.Mapping.Update.Internal.UpdateCompiler.BuildPredicate(DbModificationCommandTree commandTree, PropagatorResult referenceRow, PropagatorResult current, TableChangeProcessor processor, Boolean& rowMustBeTouched)
at System.Data.Mapping.Update.Internal.UpdateCompiler.BuildDeleteCommand(PropagatorResult oldRow, TableChangeProcessor processor)
at System.Data.Mapping.Update.Internal.TableChangeProcessor.CompileCommands(ChangeNode changeNode, UpdateCompiler compiler)
Can someone help me?
Why is Entity Framework mapping so rigid? Couldn't it be more flexible?
P.S.: I suspect the error I'm getting is caused by an association.
I have an entity named Province and another named Country, and I think the association between these entities is causing the problem on update and delete.
Regards,
Douglas Aguiar
This may or may not help you, but I had the same error from doing this same thing. I edited the conceptual model and changed the primary key field from Int32 to Decimal. So far, that seems to have fixed things. I still need to test again against SQL Server to make sure this didn't break it.
I was getting the error "The specified value is not an instance of type 'Edm.Decimal' Parameter name: value", as you posted in your question. I had changed the default data types from Decimal to Int32, as this better reflects the true typing. When I first hit this error I rolled back the type changes and was still getting an exception, but it changed slightly, which led to further digging. Bottom line: in my scenario we were expecting a trigger to populate the PK during persistence via a BEFORE INSERT trigger. The problem was that the domain class built by EF was setting the PK to 0, so the trigger never fired because the incoming PK was not null. Of course EF will not let you make the entity PK nullable. Maybe this will help someone else in the future.

Server-generated keys and server-generated values are not supported by SQL Server Compact

I just started to play with Entity Framework, so I decided to connect it to my existing SQL Server CE database. I have a table with an IDENTITY(1, 1) primary key, but when I tried to add an entity I got the above-mentioned error.
From an MS TechNet article I learned that:
SQL Server Compact does not support entities with server-generated keys or values when it is used with the Entity Framework.
When using the Entity Framework, an entity’s keys may be marked as server generated. This enables the database to generate a value for the key on insertion or entity creation. Additionally, zero or more properties of an entity may be marked as server-generated values. For more information, see the Store Generated Pattern topic in the Entity Framework documentation.
SQL Server Compact does not support entities with server-generated keys or values when it is used with the Entity Framework, although the Entity Framework allows you to define entity types with server-generated keys or values. Data manipulation operation on an entity that has server-generated values throws a "Not supported" exception.
So now I have a few questions:
Why would you mark a key as server-generated if it is not supported and will throw an exception? It's hard to make sense of the quoted paragraph.
When I tried to add StoreGeneratedPattern="Identity" to my entity's property, Visual Studio complained that it is not allowed. What am I doing wrong?
What is the best workaround for this limitation (including switching to another DB)? My constraints are zero installation and using Entity Framework.
When I hit this limitation, I changed the type to uniqueidentifier
Using a uniqueidentifier or generating a bigint/int key value manually is your best option.
Something like this perhaps ...
private static object lockObject = new object();
private static long nextID = -1;
public static long GetNextID()
{
    lock (lockObject)
    {
        if (nextID == -1) nextID = DateTime.UtcNow.Ticks; else nextID++;
        return nextID;
    }
}
This assumes that you don't generate more than one record per tick during an application run (plus the time it takes to stop and restart). I believe that's a reasonable assumption, but if you want a totally bullet-proof (though more complex) solution, read the highest ID from the database and increment from that.
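Usage would then look something like this (a sketch; the MyEntity class, its ID property, and the ObjectContext members are assumptions):
// Assign the client-generated key before saving, since SQL CE will not generate one.
var entity = new MyEntity { ID = GetNextID() };   // GetNextID() as defined above
context.MyEntities.AddObject(entity);
context.SaveChanges();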
SQL CE version 4.0 fixed this problem with its Entity Framework provider.
I just hit this issue too... mostlytech's answer is probably the best option: GUIDs are very easy to use and the risk of key collision is very low (although not nonexistent).
Why would you mark key as server-generated if it is not supported and will throw an exception? It's hard to make sence from the quoted paragraph.
Because SQL Server (not Compact) supports it, and other third parties may support it too... Entity Framework is not only for SQL Server Compact ;)
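If you go the GUID route, a minimal sketch looks like this (the Order entity and context names are assumptions; the key column would be a uniqueidentifier):
public class Order
{
    public Guid Id { get; set; }            // maps to uniqueidentifier
    public string Description { get; set; }
}

// Generate the key on the client so SQL CE never has to.
var order = new Order { Id = Guid.NewGuid(), Description = "Test" };
context.Orders.AddObject(order);            // or context.Orders.Add(order) with DbContext
context.SaveChanges();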
In my case, all of my classes have the primary key named "ID".
I created an interface:
public interface IID
{
    Int32 ID { get; set; }
}
Then I created an extension method:
public static Int32 GetNextID<T>(this ObjectSet<T> objects)
    where T : class, IID
{
    T entry = objects.OrderByDescending(u => u.ID).FirstOrDefault();
    if (entry == default(T))
        return 1;
    return entry.ID + 1;
}
Then when I need a new ID, I just do this:
MyObject myobj = new MyObject();
myobj.ID = entities.MyTable.GetNextID();
The other option is to use SqlCeResultSet on the tables that have the identity column.
I have a primary key named ID with a data type of INT32, set as an identity column.
Just do this:
MyEntity entities = new MyEntity();   // MyEntity here is the generated ObjectContext
string command = "INSERT INTO Message (Created, Message, MsgType) VALUES ('12/1/2014', 'Hello World', 5)";
entities.ExecuteStoreCommand(command);
// Exclude the primary key from the INSERT statement,
// since SQL CE does not support server-generated keys.
// Do not use LINQ for the insert, because it supplies a default value of 0
// for primary keys with an INT data type.